00:00:00.001 Started by upstream project "autotest-per-patch" build number 120541 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 21500 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.182 > git --version # 'git version 2.39.2' 00:00:00.182 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.183 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.183 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/2 # timeout=5 00:00:13.918 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:13.929 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:13.942 Checking out Revision f7115024b58324eb1821d2923066970ea28490fc (FETCH_HEAD) 00:00:13.942 > git config core.sparsecheckout # timeout=10 00:00:13.952 > git read-tree -mu HEAD # timeout=10 00:00:13.970 > git checkout -f f7115024b58324eb1821d2923066970ea28490fc # timeout=5 00:00:13.989 Commit message: "jobs/autotest-upstream: Enable ASan, UBSan on all jobs" 00:00:13.989 > git rev-list --no-walk 77e645413453ce9660898a799e28995c970fadc7 # timeout=10 00:00:14.113 [Pipeline] Start of Pipeline 00:00:14.125 [Pipeline] library 00:00:14.126 Loading library shm_lib@master 00:00:14.126 Library shm_lib@master is cached. Copying from home. 00:00:14.144 [Pipeline] node 00:00:14.154 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:14.156 [Pipeline] { 00:00:14.169 [Pipeline] catchError 00:00:14.171 [Pipeline] { 00:00:14.185 [Pipeline] wrap 00:00:14.192 [Pipeline] { 00:00:14.199 [Pipeline] stage 00:00:14.201 [Pipeline] { (Prologue) 00:00:14.387 [Pipeline] sh 00:00:14.705 + logger -p user.info -t JENKINS-CI 00:00:14.726 [Pipeline] echo 00:00:14.728 Node: WFP22 00:00:14.735 [Pipeline] sh 00:00:15.025 [Pipeline] setCustomBuildProperty 00:00:15.035 [Pipeline] echo 00:00:15.036 Cleanup processes 00:00:15.038 [Pipeline] sh 00:00:15.308 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:15.308 2173437 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:15.320 [Pipeline] sh 00:00:15.595 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:15.595 ++ grep -v 'sudo pgrep' 00:00:15.595 ++ awk '{print $1}' 00:00:15.595 + sudo kill -9 00:00:15.595 + true 00:00:15.608 [Pipeline] cleanWs 00:00:15.639 [WS-CLEANUP] Deleting project workspace... 00:00:15.639 [WS-CLEANUP] Deferred wipeout is used... 
00:00:15.645 [WS-CLEANUP] done 00:00:15.650 [Pipeline] setCustomBuildProperty 00:00:15.663 [Pipeline] sh 00:00:15.938 + sudo git config --global --replace-all safe.directory '*' 00:00:16.009 [Pipeline] nodesByLabel 00:00:16.010 Found a total of 1 nodes with the 'sorcerer' label 00:00:16.021 [Pipeline] httpRequest 00:00:16.026 HttpMethod: GET 00:00:16.027 URL: http://10.211.164.101/packages/jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:00:16.029 Sending request to url: http://10.211.164.101/packages/jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:00:16.039 Response Code: HTTP/1.1 200 OK 00:00:16.040 Success: Status code 200 is in the accepted range: 200,404 00:00:16.041 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:00:19.292 [Pipeline] sh 00:00:19.574 + tar --no-same-owner -xf jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:00:19.593 [Pipeline] httpRequest 00:00:19.598 HttpMethod: GET 00:00:19.598 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:19.599 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:19.621 Response Code: HTTP/1.1 200 OK 00:00:19.622 Success: Status code 200 is in the accepted range: 200,404 00:00:19.622 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:34.007 [Pipeline] sh 00:00:34.289 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:36.833 [Pipeline] sh 00:00:37.135 + git -C spdk log --oneline -n5 00:00:37.135 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:00:37.135 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel 00:00:37.135 54944c1d1 event: don't NOTICELOG when no RPC server started 00:00:37.135 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app 00:00:37.135 5dc808124 init: add spdk_subsystem_exists() 00:00:37.150 [Pipeline] } 00:00:37.169 [Pipeline] // stage 00:00:37.176 [Pipeline] stage 00:00:37.178 [Pipeline] { (Prepare) 00:00:37.196 [Pipeline] writeFile 00:00:37.206 [Pipeline] sh 00:00:37.483 + logger -p user.info -t JENKINS-CI 00:00:37.498 [Pipeline] sh 00:00:37.780 + logger -p user.info -t JENKINS-CI 00:00:37.790 [Pipeline] sh 00:00:38.071 + cat autorun-spdk.conf 00:00:38.071 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.071 SPDK_TEST_NVMF=1 00:00:38.071 SPDK_TEST_NVME_CLI=1 00:00:38.071 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.071 SPDK_TEST_NVMF_NICS=e810 00:00:38.071 SPDK_TEST_VFIOUSER=1 00:00:38.071 SPDK_RUN_ASAN=1 00:00:38.071 SPDK_RUN_UBSAN=1 00:00:38.071 NET_TYPE=phy 00:00:38.079 RUN_NIGHTLY=0 00:00:38.118 [Pipeline] readFile 00:00:38.146 [Pipeline] withEnv 00:00:38.148 [Pipeline] { 00:00:38.162 [Pipeline] sh 00:00:38.556 + set -ex 00:00:38.556 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:38.556 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.556 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.556 ++ SPDK_TEST_NVMF=1 00:00:38.556 ++ SPDK_TEST_NVME_CLI=1 00:00:38.556 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.556 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.556 ++ SPDK_TEST_VFIOUSER=1 00:00:38.556 ++ SPDK_RUN_ASAN=1 00:00:38.556 ++ SPDK_RUN_UBSAN=1 00:00:38.556 ++ NET_TYPE=phy 00:00:38.556 ++ RUN_NIGHTLY=0 00:00:38.556 + case $SPDK_TEST_NVMF_NICS in 00:00:38.556 + DRIVERS=ice 00:00:38.556 + [[ tcp == \r\d\m\a ]] 
00:00:38.556 + [[ -n ice ]] 00:00:38.556 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:38.557 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:38.557 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:38.557 rmmod: ERROR: Module irdma is not currently loaded 00:00:38.557 rmmod: ERROR: Module i40iw is not currently loaded 00:00:38.557 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:38.557 + true 00:00:38.557 + for D in $DRIVERS 00:00:38.557 + sudo modprobe ice 00:00:38.557 + exit 0 00:00:38.564 [Pipeline] } 00:00:38.578 [Pipeline] // withEnv 00:00:38.582 [Pipeline] } 00:00:38.596 [Pipeline] // stage 00:00:38.604 [Pipeline] catchError 00:00:38.605 [Pipeline] { 00:00:38.615 [Pipeline] timeout 00:00:38.615 Timeout set to expire in 40 min 00:00:38.616 [Pipeline] { 00:00:38.626 [Pipeline] stage 00:00:38.627 [Pipeline] { (Tests) 00:00:38.639 [Pipeline] sh 00:00:38.921 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.921 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.921 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.921 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:38.921 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.921 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:38.921 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:38.921 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:38.921 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:38.921 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:38.921 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.921 + source /etc/os-release 00:00:38.921 ++ NAME='Fedora Linux' 00:00:38.921 ++ VERSION='38 (Cloud Edition)' 00:00:38.921 ++ ID=fedora 00:00:38.921 ++ VERSION_ID=38 00:00:38.921 ++ VERSION_CODENAME= 00:00:38.921 ++ PLATFORM_ID=platform:f38 00:00:38.921 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:38.921 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:38.921 ++ LOGO=fedora-logo-icon 00:00:38.921 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:38.921 ++ HOME_URL=https://fedoraproject.org/ 00:00:38.921 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:38.921 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:38.921 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:38.921 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:38.921 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:38.921 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:38.921 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:38.921 ++ SUPPORT_END=2024-05-14 00:00:38.921 ++ VARIANT='Cloud Edition' 00:00:38.921 ++ VARIANT_ID=cloud 00:00:38.921 + uname -a 00:00:38.921 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:38.921 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:41.453 Hugepages 00:00:41.453 node hugesize free / total 00:00:41.453 node0 1048576kB 0 / 0 00:00:41.453 node0 2048kB 0 / 0 00:00:41.453 node1 1048576kB 0 / 0 00:00:41.453 node1 2048kB 0 / 0 00:00:41.453 00:00:41.453 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:41.453 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.4 
8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:41.453 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:41.453 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:41.453 + rm -f /tmp/spdk-ld-path 00:00:41.453 + source autorun-spdk.conf 00:00:41.453 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.453 ++ SPDK_TEST_NVMF=1 00:00:41.453 ++ SPDK_TEST_NVME_CLI=1 00:00:41.453 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.453 ++ SPDK_TEST_NVMF_NICS=e810 00:00:41.453 ++ SPDK_TEST_VFIOUSER=1 00:00:41.453 ++ SPDK_RUN_ASAN=1 00:00:41.453 ++ SPDK_RUN_UBSAN=1 00:00:41.453 ++ NET_TYPE=phy 00:00:41.453 ++ RUN_NIGHTLY=0 00:00:41.453 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:41.453 + [[ -n '' ]] 00:00:41.453 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.453 + for M in /var/spdk/build-*-manifest.txt 00:00:41.453 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:41.453 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.453 + for M in /var/spdk/build-*-manifest.txt 00:00:41.453 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:41.453 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.453 ++ uname 00:00:41.712 + [[ Linux == \L\i\n\u\x ]] 00:00:41.712 + sudo dmesg -T 00:00:41.712 + sudo dmesg --clear 00:00:41.712 + dmesg_pid=2174370 00:00:41.712 + [[ Fedora Linux == FreeBSD ]] 00:00:41.712 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.712 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.712 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:41.712 + [[ -x /usr/src/fio-static/fio ]] 00:00:41.712 + export FIO_BIN=/usr/src/fio-static/fio 00:00:41.712 + FIO_BIN=/usr/src/fio-static/fio 00:00:41.712 + sudo dmesg -Tw 00:00:41.712 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:41.712 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:41.712 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:41.712 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.712 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.712 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:41.712 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.712 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.712 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:41.712 Test configuration: 00:00:41.712 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.712 SPDK_TEST_NVMF=1 00:00:41.712 SPDK_TEST_NVME_CLI=1 00:00:41.712 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.712 SPDK_TEST_NVMF_NICS=e810 00:00:41.712 SPDK_TEST_VFIOUSER=1 00:00:41.712 SPDK_RUN_ASAN=1 00:00:41.712 SPDK_RUN_UBSAN=1 00:00:41.712 NET_TYPE=phy 00:00:41.712 RUN_NIGHTLY=0 11:36:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:41.712 11:36:32 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:41.712 11:36:32 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:41.712 11:36:32 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:41.712 11:36:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.712 11:36:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.712 11:36:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.712 11:36:32 -- paths/export.sh@5 -- $ export PATH 00:00:41.712 11:36:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.712 11:36:32 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:41.712 11:36:32 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:41.712 11:36:32 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713432992.XXXXXX 00:00:41.712 11:36:32 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713432992.RHA5PG 00:00:41.712 11:36:32 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:41.712 
11:36:32 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:41.712 11:36:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:41.712 11:36:32 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:41.712 11:36:32 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:41.712 11:36:32 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:41.712 11:36:32 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:41.712 11:36:32 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.712 11:36:32 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user' 00:00:41.712 11:36:32 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:41.712 11:36:32 -- pm/common@17 -- $ local monitor 00:00:41.712 11:36:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.712 11:36:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2174406 00:00:41.712 11:36:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.712 11:36:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2174408 00:00:41.712 11:36:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.712 11:36:32 -- pm/common@21 -- $ date +%s 00:00:41.712 11:36:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2174410 00:00:41.712 11:36:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.712 11:36:32 -- pm/common@21 -- $ date +%s 00:00:41.712 11:36:32 -- pm/common@21 -- $ date +%s 00:00:41.712 11:36:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2174413 00:00:41.712 11:36:32 -- pm/common@26 -- $ sleep 1 00:00:41.712 11:36:32 -- pm/common@21 -- $ date +%s 00:00:41.712 11:36:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713432992 00:00:41.712 11:36:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713432992 00:00:41.712 11:36:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713432992 00:00:41.712 11:36:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713432992 00:00:41.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713432992_collect-bmc-pm.bmc.pm.log 00:00:41.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713432992_collect-vmstat.pm.log 00:00:41.971 
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713432992_collect-cpu-load.pm.log 00:00:41.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713432992_collect-cpu-temp.pm.log 00:00:42.908 11:36:33 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:42.908 11:36:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:42.908 11:36:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:42.908 11:36:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.908 11:36:33 -- spdk/autobuild.sh@16 -- $ date -u 00:00:42.908 Thu Apr 18 09:36:33 AM UTC 2024 00:00:42.908 11:36:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:42.908 v24.05-pre-407-g65b4e17c6 00:00:42.908 11:36:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:42.908 11:36:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:42.908 11:36:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:42.908 11:36:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:42.908 11:36:33 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.908 ************************************ 00:00:42.908 START TEST asan 00:00:42.908 ************************************ 00:00:42.908 11:36:33 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:00:42.908 using asan 00:00:42.908 00:00:42.908 real 0m0.000s 00:00:42.908 user 0m0.000s 00:00:42.908 sys 0m0.000s 00:00:42.908 11:36:33 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:42.908 11:36:33 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.908 ************************************ 00:00:42.908 END TEST asan 00:00:42.908 ************************************ 00:00:43.167 11:36:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.167 11:36:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:43.167 11:36:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:43.167 11:36:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:43.167 11:36:33 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.167 ************************************ 00:00:43.167 START TEST ubsan 00:00:43.167 ************************************ 00:00:43.167 11:36:33 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:43.167 using ubsan 00:00:43.167 00:00:43.167 real 0m0.000s 00:00:43.167 user 0m0.000s 00:00:43.167 sys 0m0.000s 00:00:43.167 11:36:33 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:43.167 11:36:33 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.167 ************************************ 00:00:43.167 END TEST ubsan 00:00:43.167 ************************************ 00:00:43.167 11:36:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.167 11:36:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.167 11:36:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.167 11:36:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.167 11:36:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.167 11:36:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.167 11:36:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.167 11:36:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.167 11:36:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:43.426 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:43.426 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:43.684 Using 'verbs' RDMA provider 00:00:59.487 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:11.688 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:11.688 Creating mk/config.mk...done. 00:01:11.688 Creating mk/cc.flags.mk...done. 00:01:11.688 Type 'make' to build. 00:01:11.688 11:37:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:11.688 11:37:01 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:11.688 11:37:01 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:11.688 11:37:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.688 ************************************ 00:01:11.688 START TEST make 00:01:11.688 ************************************ 00:01:11.688 11:37:01 -- common/autotest_common.sh@1111 -- $ make -j112 00:01:11.688 make[1]: Nothing to be done for 'all'. 00:01:12.711 The Meson build system 00:01:12.711 Version: 1.3.1 00:01:12.711 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:12.711 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:12.711 Build type: native build 00:01:12.711 Project name: libvfio-user 00:01:12.711 Project version: 0.0.1 00:01:12.711 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.711 C linker for the host machine: cc ld.bfd 2.39-16 00:01:12.711 Host machine cpu family: x86_64 00:01:12.711 Host machine cpu: x86_64 00:01:12.711 Run-time dependency threads found: YES 00:01:12.711 Library dl found: YES 00:01:12.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.711 Run-time dependency json-c found: YES 0.17 00:01:12.711 Run-time dependency cmocka found: YES 1.1.7 00:01:12.711 Program pytest-3 found: NO 00:01:12.711 Program flake8 found: NO 00:01:12.711 Program misspell-fixer found: NO 00:01:12.711 Program restructuredtext-lint found: NO 00:01:12.711 Program valgrind found: YES (/usr/bin/valgrind) 00:01:12.711 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:12.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.711 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.711 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:12.711 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:12.711 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:12.711 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
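For reference, the configure-and-build step that autobuild runs above can be reproduced by hand from an SPDK checkout. A minimal sketch using only the flags visible in this log; the --with-fio path and the job count are machine-specific assumptions, not requirements:

    # Flags copied from the configure invocation logged above.
    # Adjust --with-fio to wherever the fio sources live on your machine.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-vfio-user --with-shared
    make -j"$(nproc)"    # this CI host runs make -j112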
00:01:12.711 Build targets in project: 8 00:01:12.711 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:12.711 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:12.711 00:01:12.711 libvfio-user 0.0.1 00:01:12.711 00:01:12.711 User defined options 00:01:12.711 buildtype : debug 00:01:12.711 default_library: shared 00:01:12.711 libdir : /usr/local/lib 00:01:12.711 00:01:12.711 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.279 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:13.279 [1/37] Compiling C object samples/null.p/null.c.o 00:01:13.279 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:13.279 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:13.279 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:13.279 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:13.279 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:13.279 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:13.279 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:13.279 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:13.279 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:13.279 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:13.279 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:13.279 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:13.279 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:13.279 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:13.279 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:13.279 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:13.279 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:13.279 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:13.279 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:13.279 [21/37] Compiling C object samples/server.p/server.c.o 00:01:13.279 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:13.279 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:13.279 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:13.279 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:13.279 [26/37] Compiling C object samples/client.p/client.c.o 00:01:13.279 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:13.536 [28/37] Linking target samples/client 00:01:13.536 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:13.536 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:13.536 [31/37] Linking target test/unit_tests 00:01:13.536 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:13.536 [33/37] Linking target samples/gpio-pci-idio-16 00:01:13.536 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:13.536 [35/37] Linking target samples/null 00:01:13.536 [36/37] Linking target samples/lspci 00:01:13.536 [37/37] Linking target samples/server 00:01:13.536 INFO: autodetecting backend as ninja 00:01:13.536 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:13.794 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.054 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:14.054 ninja: no work to do. 00:01:19.328 The Meson build system 00:01:19.328 Version: 1.3.1 00:01:19.328 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:19.328 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:19.328 Build type: native build 00:01:19.328 Program cat found: YES (/usr/bin/cat) 00:01:19.328 Project name: DPDK 00:01:19.328 Project version: 23.11.0 00:01:19.328 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.328 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.328 Host machine cpu family: x86_64 00:01:19.328 Host machine cpu: x86_64 00:01:19.328 Message: ## Building in Developer Mode ## 00:01:19.328 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:19.328 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:19.328 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:19.328 Program python3 found: YES (/usr/bin/python3) 00:01:19.328 Program cat found: YES (/usr/bin/cat) 00:01:19.328 Compiler for C supports arguments -march=native: YES 00:01:19.328 Checking for size of "void *" : 8 00:01:19.328 Checking for size of "void *" : 8 (cached) 00:01:19.328 Library m found: YES 00:01:19.328 Library numa found: YES 00:01:19.328 Has header "numaif.h" : YES 00:01:19.328 Library fdt found: NO 00:01:19.328 Library execinfo found: NO 00:01:19.328 Has header "execinfo.h" : YES 00:01:19.328 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.328 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:19.328 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:19.328 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:19.328 Run-time dependency openssl found: YES 3.0.9 00:01:19.328 Run-time dependency libpcap found: YES 1.10.4 00:01:19.328 Has header "pcap.h" with dependency libpcap: YES 00:01:19.328 Compiler for C supports arguments -Wcast-qual: YES 00:01:19.328 Compiler for C supports arguments -Wdeprecated: YES 00:01:19.328 Compiler for C supports arguments -Wformat: YES 00:01:19.328 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:19.328 Compiler for C supports arguments -Wformat-security: NO 00:01:19.328 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.328 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:19.328 Compiler for C supports arguments -Wnested-externs: YES 00:01:19.328 Compiler for C supports arguments -Wold-style-definition: YES 00:01:19.328 Compiler for C supports arguments -Wpointer-arith: YES 00:01:19.328 Compiler for C supports arguments -Wsign-compare: YES 00:01:19.328 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:19.328 Compiler for C supports arguments -Wundef: YES 00:01:19.328 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.328 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:19.328 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:19.328 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:19.329 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:19.329 Program objdump found: YES (/usr/bin/objdump) 00:01:19.329 Compiler for C supports arguments -mavx512f: YES 00:01:19.329 Checking if "AVX512 checking" compiles: YES 00:01:19.329 Fetching value of define "__SSE4_2__" : 1 00:01:19.329 Fetching value of define "__AES__" : 1 00:01:19.329 Fetching value of define "__AVX__" : 1 00:01:19.329 Fetching value of define "__AVX2__" : 1 00:01:19.329 Fetching value of define "__AVX512BW__" : 1 00:01:19.329 Fetching value of define "__AVX512CD__" : 1 00:01:19.329 Fetching value of define "__AVX512DQ__" : 1 00:01:19.329 Fetching value of define "__AVX512F__" : 1 00:01:19.329 Fetching value of define "__AVX512VL__" : 1 00:01:19.329 Fetching value of define "__PCLMUL__" : 1 00:01:19.329 Fetching value of define "__RDRND__" : 1 00:01:19.329 Fetching value of define "__RDSEED__" : 1 00:01:19.329 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:19.329 Fetching value of define "__znver1__" : (undefined) 00:01:19.329 Fetching value of define "__znver2__" : (undefined) 00:01:19.329 Fetching value of define "__znver3__" : (undefined) 00:01:19.329 Fetching value of define "__znver4__" : (undefined) 00:01:19.329 Library asan found: YES 00:01:19.329 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:19.329 Message: lib/log: Defining dependency "log" 00:01:19.329 Message: lib/kvargs: Defining dependency "kvargs" 00:01:19.329 Message: lib/telemetry: Defining dependency "telemetry" 00:01:19.329 Library rt found: YES 00:01:19.329 Checking for function "getentropy" : NO 00:01:19.329 Message: lib/eal: Defining dependency "eal" 00:01:19.329 Message: lib/ring: Defining dependency "ring" 00:01:19.329 Message: lib/rcu: Defining dependency "rcu" 00:01:19.329 Message: lib/mempool: Defining dependency "mempool" 00:01:19.329 Message: lib/mbuf: Defining dependency "mbuf" 00:01:19.329 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:19.329 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:19.329 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:19.329 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:19.329 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:19.329 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:19.329 Compiler for C supports arguments -mpclmul: YES 00:01:19.329 Compiler for C supports arguments -maes: YES 00:01:19.329 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.329 Compiler for C supports arguments -mavx512bw: YES 00:01:19.329 Compiler for C supports arguments -mavx512dq: YES 00:01:19.329 Compiler for C supports arguments -mavx512vl: YES 00:01:19.329 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:19.329 Compiler for C supports arguments -mavx2: YES 00:01:19.329 Compiler for C supports arguments -mavx: YES 00:01:19.329 Message: lib/net: Defining dependency "net" 00:01:19.329 Message: lib/meter: Defining dependency "meter" 00:01:19.329 Message: lib/ethdev: Defining dependency "ethdev" 00:01:19.329 Message: lib/pci: Defining dependency "pci" 00:01:19.329 Message: lib/cmdline: Defining dependency "cmdline" 00:01:19.329 Message: lib/hash: Defining dependency "hash" 00:01:19.329 Message: lib/timer: Defining dependency "timer" 00:01:19.329 Message: lib/compressdev: Defining dependency "compressdev" 00:01:19.329 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:19.329 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:19.329 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:19.329 Message: lib/power: Defining dependency "power" 00:01:19.329 Message: lib/reorder: Defining dependency "reorder" 00:01:19.329 Message: lib/security: Defining dependency "security" 00:01:19.329 Has header "linux/userfaultfd.h" : YES 00:01:19.329 Has header "linux/vduse.h" : YES 00:01:19.329 Message: lib/vhost: Defining dependency "vhost" 00:01:19.329 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:19.329 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:19.329 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:19.329 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:19.329 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:19.329 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:19.329 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:19.329 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:19.329 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:19.329 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:19.329 Program doxygen found: YES (/usr/bin/doxygen) 00:01:19.329 Configuring doxy-api-html.conf using configuration 00:01:19.329 Configuring doxy-api-man.conf using configuration 00:01:19.329 Program mandb found: YES (/usr/bin/mandb) 00:01:19.329 Program sphinx-build found: NO 00:01:19.329 Configuring rte_build_config.h using configuration 00:01:19.329 Message: 00:01:19.329 ================= 00:01:19.329 Applications Enabled 00:01:19.329 ================= 00:01:19.329 00:01:19.329 apps: 00:01:19.329 00:01:19.329 00:01:19.329 Message: 00:01:19.329 ================= 00:01:19.329 Libraries Enabled 00:01:19.329 ================= 00:01:19.329 00:01:19.329 libs: 00:01:19.329 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:19.329 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:19.329 cryptodev, dmadev, power, reorder, security, vhost, 00:01:19.329 00:01:19.329 Message: 00:01:19.329 =============== 00:01:19.329 Drivers Enabled 00:01:19.329 =============== 00:01:19.329 00:01:19.329 common: 00:01:19.329 00:01:19.329 bus: 00:01:19.329 pci, vdev, 00:01:19.329 mempool: 00:01:19.329 ring, 00:01:19.329 dma: 00:01:19.329 00:01:19.329 net: 00:01:19.329 00:01:19.329 crypto: 00:01:19.329 00:01:19.329 compress: 00:01:19.329 00:01:19.329 vdpa: 00:01:19.329 00:01:19.329 00:01:19.329 Message: 00:01:19.329 ================= 00:01:19.329 Content Skipped 00:01:19.329 ================= 00:01:19.329 00:01:19.329 apps: 00:01:19.329 dumpcap: explicitly disabled via build config 00:01:19.329 graph: explicitly disabled via build config 00:01:19.329 pdump: explicitly disabled via build config 00:01:19.329 proc-info: explicitly disabled via build config 00:01:19.329 test-acl: explicitly disabled via build config 00:01:19.329 test-bbdev: explicitly disabled via build config 00:01:19.329 test-cmdline: explicitly disabled via build config 00:01:19.329 test-compress-perf: explicitly disabled via build config 00:01:19.329 test-crypto-perf: explicitly disabled via build config 00:01:19.329 test-dma-perf: explicitly disabled via build config 00:01:19.329 test-eventdev: explicitly disabled via build config 00:01:19.329 test-fib: explicitly disabled via build config 00:01:19.329 test-flow-perf: explicitly disabled via build config 00:01:19.329 test-gpudev: explicitly 
disabled via build config 00:01:19.329 test-mldev: explicitly disabled via build config 00:01:19.329 test-pipeline: explicitly disabled via build config 00:01:19.329 test-pmd: explicitly disabled via build config 00:01:19.329 test-regex: explicitly disabled via build config 00:01:19.329 test-sad: explicitly disabled via build config 00:01:19.329 test-security-perf: explicitly disabled via build config 00:01:19.329 00:01:19.329 libs: 00:01:19.329 metrics: explicitly disabled via build config 00:01:19.329 acl: explicitly disabled via build config 00:01:19.329 bbdev: explicitly disabled via build config 00:01:19.329 bitratestats: explicitly disabled via build config 00:01:19.329 bpf: explicitly disabled via build config 00:01:19.329 cfgfile: explicitly disabled via build config 00:01:19.329 distributor: explicitly disabled via build config 00:01:19.329 efd: explicitly disabled via build config 00:01:19.329 eventdev: explicitly disabled via build config 00:01:19.329 dispatcher: explicitly disabled via build config 00:01:19.329 gpudev: explicitly disabled via build config 00:01:19.329 gro: explicitly disabled via build config 00:01:19.329 gso: explicitly disabled via build config 00:01:19.329 ip_frag: explicitly disabled via build config 00:01:19.329 jobstats: explicitly disabled via build config 00:01:19.329 latencystats: explicitly disabled via build config 00:01:19.329 lpm: explicitly disabled via build config 00:01:19.329 member: explicitly disabled via build config 00:01:19.329 pcapng: explicitly disabled via build config 00:01:19.329 rawdev: explicitly disabled via build config 00:01:19.329 regexdev: explicitly disabled via build config 00:01:19.329 mldev: explicitly disabled via build config 00:01:19.329 rib: explicitly disabled via build config 00:01:19.329 sched: explicitly disabled via build config 00:01:19.329 stack: explicitly disabled via build config 00:01:19.329 ipsec: explicitly disabled via build config 00:01:19.329 pdcp: explicitly disabled via build config 00:01:19.329 fib: explicitly disabled via build config 00:01:19.329 port: explicitly disabled via build config 00:01:19.329 pdump: explicitly disabled via build config 00:01:19.329 table: explicitly disabled via build config 00:01:19.329 pipeline: explicitly disabled via build config 00:01:19.329 graph: explicitly disabled via build config 00:01:19.329 node: explicitly disabled via build config 00:01:19.329 00:01:19.329 drivers: 00:01:19.329 common/cpt: not in enabled drivers build config 00:01:19.329 common/dpaax: not in enabled drivers build config 00:01:19.329 common/iavf: not in enabled drivers build config 00:01:19.329 common/idpf: not in enabled drivers build config 00:01:19.329 common/mvep: not in enabled drivers build config 00:01:19.329 common/octeontx: not in enabled drivers build config 00:01:19.329 bus/auxiliary: not in enabled drivers build config 00:01:19.329 bus/cdx: not in enabled drivers build config 00:01:19.329 bus/dpaa: not in enabled drivers build config 00:01:19.329 bus/fslmc: not in enabled drivers build config 00:01:19.329 bus/ifpga: not in enabled drivers build config 00:01:19.329 bus/platform: not in enabled drivers build config 00:01:19.330 bus/vmbus: not in enabled drivers build config 00:01:19.330 common/cnxk: not in enabled drivers build config 00:01:19.330 common/mlx5: not in enabled drivers build config 00:01:19.330 common/nfp: not in enabled drivers build config 00:01:19.330 common/qat: not in enabled drivers build config 00:01:19.330 common/sfc_efx: not in enabled drivers build config 
00:01:19.330 mempool/bucket: not in enabled drivers build config 00:01:19.330 mempool/cnxk: not in enabled drivers build config 00:01:19.330 mempool/dpaa: not in enabled drivers build config 00:01:19.330 mempool/dpaa2: not in enabled drivers build config 00:01:19.330 mempool/octeontx: not in enabled drivers build config 00:01:19.330 mempool/stack: not in enabled drivers build config 00:01:19.330 dma/cnxk: not in enabled drivers build config 00:01:19.330 dma/dpaa: not in enabled drivers build config 00:01:19.330 dma/dpaa2: not in enabled drivers build config 00:01:19.330 dma/hisilicon: not in enabled drivers build config 00:01:19.330 dma/idxd: not in enabled drivers build config 00:01:19.330 dma/ioat: not in enabled drivers build config 00:01:19.330 dma/skeleton: not in enabled drivers build config 00:01:19.330 net/af_packet: not in enabled drivers build config 00:01:19.330 net/af_xdp: not in enabled drivers build config 00:01:19.330 net/ark: not in enabled drivers build config 00:01:19.330 net/atlantic: not in enabled drivers build config 00:01:19.330 net/avp: not in enabled drivers build config 00:01:19.330 net/axgbe: not in enabled drivers build config 00:01:19.330 net/bnx2x: not in enabled drivers build config 00:01:19.330 net/bnxt: not in enabled drivers build config 00:01:19.330 net/bonding: not in enabled drivers build config 00:01:19.330 net/cnxk: not in enabled drivers build config 00:01:19.330 net/cpfl: not in enabled drivers build config 00:01:19.330 net/cxgbe: not in enabled drivers build config 00:01:19.330 net/dpaa: not in enabled drivers build config 00:01:19.330 net/dpaa2: not in enabled drivers build config 00:01:19.330 net/e1000: not in enabled drivers build config 00:01:19.330 net/ena: not in enabled drivers build config 00:01:19.330 net/enetc: not in enabled drivers build config 00:01:19.330 net/enetfec: not in enabled drivers build config 00:01:19.330 net/enic: not in enabled drivers build config 00:01:19.330 net/failsafe: not in enabled drivers build config 00:01:19.330 net/fm10k: not in enabled drivers build config 00:01:19.330 net/gve: not in enabled drivers build config 00:01:19.330 net/hinic: not in enabled drivers build config 00:01:19.330 net/hns3: not in enabled drivers build config 00:01:19.330 net/i40e: not in enabled drivers build config 00:01:19.330 net/iavf: not in enabled drivers build config 00:01:19.330 net/ice: not in enabled drivers build config 00:01:19.330 net/idpf: not in enabled drivers build config 00:01:19.330 net/igc: not in enabled drivers build config 00:01:19.330 net/ionic: not in enabled drivers build config 00:01:19.330 net/ipn3ke: not in enabled drivers build config 00:01:19.330 net/ixgbe: not in enabled drivers build config 00:01:19.330 net/mana: not in enabled drivers build config 00:01:19.330 net/memif: not in enabled drivers build config 00:01:19.330 net/mlx4: not in enabled drivers build config 00:01:19.330 net/mlx5: not in enabled drivers build config 00:01:19.330 net/mvneta: not in enabled drivers build config 00:01:19.330 net/mvpp2: not in enabled drivers build config 00:01:19.330 net/netvsc: not in enabled drivers build config 00:01:19.330 net/nfb: not in enabled drivers build config 00:01:19.330 net/nfp: not in enabled drivers build config 00:01:19.330 net/ngbe: not in enabled drivers build config 00:01:19.330 net/null: not in enabled drivers build config 00:01:19.330 net/octeontx: not in enabled drivers build config 00:01:19.330 net/octeon_ep: not in enabled drivers build config 00:01:19.330 net/pcap: not in enabled drivers 
build config 00:01:19.330 net/pfe: not in enabled drivers build config 00:01:19.330 net/qede: not in enabled drivers build config 00:01:19.330 net/ring: not in enabled drivers build config 00:01:19.330 net/sfc: not in enabled drivers build config 00:01:19.330 net/softnic: not in enabled drivers build config 00:01:19.330 net/tap: not in enabled drivers build config 00:01:19.330 net/thunderx: not in enabled drivers build config 00:01:19.330 net/txgbe: not in enabled drivers build config 00:01:19.330 net/vdev_netvsc: not in enabled drivers build config 00:01:19.330 net/vhost: not in enabled drivers build config 00:01:19.330 net/virtio: not in enabled drivers build config 00:01:19.330 net/vmxnet3: not in enabled drivers build config 00:01:19.330 raw/*: missing internal dependency, "rawdev" 00:01:19.330 crypto/armv8: not in enabled drivers build config 00:01:19.330 crypto/bcmfs: not in enabled drivers build config 00:01:19.330 crypto/caam_jr: not in enabled drivers build config 00:01:19.330 crypto/ccp: not in enabled drivers build config 00:01:19.330 crypto/cnxk: not in enabled drivers build config 00:01:19.330 crypto/dpaa_sec: not in enabled drivers build config 00:01:19.330 crypto/dpaa2_sec: not in enabled drivers build config 00:01:19.330 crypto/ipsec_mb: not in enabled drivers build config 00:01:19.330 crypto/mlx5: not in enabled drivers build config 00:01:19.330 crypto/mvsam: not in enabled drivers build config 00:01:19.330 crypto/nitrox: not in enabled drivers build config 00:01:19.330 crypto/null: not in enabled drivers build config 00:01:19.330 crypto/octeontx: not in enabled drivers build config 00:01:19.330 crypto/openssl: not in enabled drivers build config 00:01:19.330 crypto/scheduler: not in enabled drivers build config 00:01:19.330 crypto/uadk: not in enabled drivers build config 00:01:19.330 crypto/virtio: not in enabled drivers build config 00:01:19.330 compress/isal: not in enabled drivers build config 00:01:19.330 compress/mlx5: not in enabled drivers build config 00:01:19.330 compress/octeontx: not in enabled drivers build config 00:01:19.330 compress/zlib: not in enabled drivers build config 00:01:19.330 regex/*: missing internal dependency, "regexdev" 00:01:19.330 ml/*: missing internal dependency, "mldev" 00:01:19.330 vdpa/ifc: not in enabled drivers build config 00:01:19.330 vdpa/mlx5: not in enabled drivers build config 00:01:19.330 vdpa/nfp: not in enabled drivers build config 00:01:19.330 vdpa/sfc: not in enabled drivers build config 00:01:19.330 event/*: missing internal dependency, "eventdev" 00:01:19.330 baseband/*: missing internal dependency, "bbdev" 00:01:19.330 gpu/*: missing internal dependency, "gpudev" 00:01:19.330 00:01:19.330 00:01:19.589 Build targets in project: 85 00:01:19.589 00:01:19.589 DPDK 23.11.0 00:01:19.589 00:01:19.589 User defined options 00:01:19.589 buildtype : debug 00:01:19.589 default_library : shared 00:01:19.589 libdir : lib 00:01:19.589 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:19.589 b_sanitize : address 00:01:19.589 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:19.589 c_link_args : 00:01:19.589 cpu_instruction_set: native 00:01:19.589 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:19.589 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:19.589 enable_docs : false 00:01:19.589 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:19.589 enable_kmods : false 00:01:19.589 tests : false 00:01:19.589 00:01:19.589 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:20.170 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:20.170 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:20.170 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:20.170 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:20.170 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:20.170 [5/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:20.170 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:20.170 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:20.170 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:20.170 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:20.170 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:20.170 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:20.170 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:20.170 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:20.170 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:20.170 [15/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:20.170 [16/265] Linking static target lib/librte_kvargs.a 00:01:20.170 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:20.170 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:20.170 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:20.170 [20/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:20.433 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:20.433 [22/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:20.433 [23/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:20.433 [24/265] Linking static target lib/librte_log.a 00:01:20.433 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:20.433 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:20.433 [27/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:20.433 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:20.433 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:20.433 [30/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:20.433 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:20.433 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:20.433 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:20.433 [34/265] Linking static target lib/librte_pci.a 00:01:20.433 [35/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:20.433 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:20.433 [37/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:20.433 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:20.433 [39/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:20.433 [40/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:20.699 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:20.699 [42/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:20.699 [43/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:20.699 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:20.699 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:20.699 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:20.699 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:20.699 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:20.699 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:20.699 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:20.699 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:20.700 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:20.700 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:20.700 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:20.700 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:20.700 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:20.700 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:20.700 [58/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:20.700 [59/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.700 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:20.700 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:20.700 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:20.700 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:20.700 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:20.700 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:20.700 [66/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:20.700 [67/265] Linking static target lib/librte_meter.a 00:01:20.700 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:20.700 [69/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:20.700 [70/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.700 [71/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:20.700 [72/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:20.700 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:20.700 [74/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:20.700 [75/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:20.700 [76/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:20.700 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:20.700 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:20.700 [79/265] Linking static target lib/librte_telemetry.a 00:01:20.700 [80/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:20.700 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:20.700 [82/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:20.700 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:20.700 [84/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:20.700 [85/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:20.700 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:20.700 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:20.700 [88/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:20.700 [89/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:20.963 [90/265] Linking static target lib/librte_ring.a 00:01:20.963 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:20.963 [92/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:20.963 [93/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:20.963 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:20.963 [95/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:20.963 [96/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:20.963 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:20.963 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:20.963 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:20.963 [100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:20.963 [101/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:20.963 [102/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.963 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:20.963 [104/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:20.963 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:20.963 [106/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:20.963 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:20.963 [108/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:20.963 [109/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:20.963 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:20.963 [111/265] Linking static target lib/librte_cmdline.a 00:01:20.963 [112/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:20.963 [113/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:20.963 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:20.963 
[115/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:20.963 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:20.963 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:20.963 [118/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:20.963 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:20.963 [120/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:20.963 [121/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:20.963 [122/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:20.963 [123/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:20.963 [124/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:20.963 [125/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:20.963 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:20.963 [127/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:20.963 [128/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:20.963 [129/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:20.963 [130/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:20.963 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:20.963 [132/265] Linking static target lib/librte_timer.a 00:01:20.963 [133/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:20.963 [134/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:20.963 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:20.963 [136/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:20.963 [137/265] Linking static target lib/librte_dmadev.a 00:01:20.963 [138/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.963 [139/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:20.963 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:20.963 [141/265] Linking static target lib/librte_power.a 00:01:20.963 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:20.963 [143/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:20.963 [144/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:20.963 [145/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:20.963 [146/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:20.963 [147/265] Linking static target lib/librte_mempool.a 00:01:20.963 [148/265] Linking target lib/librte_log.so.24.0 00:01:20.963 [149/265] Linking static target lib/librte_net.a 00:01:20.963 [150/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:20.963 [151/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.221 [152/265] Linking static target lib/librte_rcu.a 00:01:21.221 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:21.221 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:21.221 [155/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:21.221 [156/265] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:21.221 [157/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:21.221 [158/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.221 [159/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:21.221 [160/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:21.221 [161/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:21.221 [162/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:21.222 [163/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:21.222 [164/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:21.222 [165/265] Linking static target lib/librte_reorder.a 00:01:21.222 [166/265] Linking static target lib/librte_compressdev.a 00:01:21.222 [167/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:21.222 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:21.222 [169/265] Linking static target lib/librte_eal.a 00:01:21.222 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:21.222 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:21.222 [172/265] Linking target lib/librte_kvargs.so.24.0 00:01:21.222 [173/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:21.222 [174/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:21.222 [175/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:21.222 [176/265] Linking static target lib/librte_security.a 00:01:21.222 [177/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:21.222 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:21.222 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:21.222 [180/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:21.222 [181/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:21.481 [182/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:21.481 [183/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.481 [184/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:21.481 [185/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:21.481 [186/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:21.481 [187/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:21.481 [188/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.481 [189/265] Linking static target drivers/librte_bus_vdev.a 00:01:21.481 [190/265] Linking target lib/librte_telemetry.so.24.0 00:01:21.481 [191/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:21.481 [192/265] Linking static target lib/librte_mbuf.a 00:01:21.481 [193/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.481 [194/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.481 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:21.481 [196/265] Generating drivers/rte_bus_pci.pmd.c with a 
custom command 00:01:21.481 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:21.481 [198/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.481 [199/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:21.481 [200/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:21.481 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:21.481 [202/265] Linking static target drivers/librte_bus_pci.a 00:01:21.481 [203/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:21.481 [204/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:21.481 [205/265] Linking static target lib/librte_hash.a 00:01:21.481 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:21.740 [207/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:21.740 [208/265] Linking static target drivers/librte_mempool_ring.a 00:01:21.740 [209/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.740 [210/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.740 [211/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:21.740 [212/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:21.740 [213/265] Linking static target lib/librte_cryptodev.a 00:01:21.999 [214/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.999 [215/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.999 [216/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.999 [217/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.258 [218/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.258 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.258 [220/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.517 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.517 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:22.517 [223/265] Linking static target lib/librte_ethdev.a 00:01:23.454 [224/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:24.024 [225/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.930 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:25.930 [227/265] Linking static target lib/librte_vhost.a 00:01:28.464 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.103 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.640 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.898 [231/265] Linking target lib/librte_eal.so.24.0 00:01:33.898 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 
00:01:34.157 [233/265] Linking target lib/librte_timer.so.24.0 00:01:34.157 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:34.157 [235/265] Linking target lib/librte_meter.so.24.0 00:01:34.157 [236/265] Linking target lib/librte_ring.so.24.0 00:01:34.157 [237/265] Linking target lib/librte_pci.so.24.0 00:01:34.157 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:34.157 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:34.157 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:34.157 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:34.157 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:34.157 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:34.157 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:34.157 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:34.157 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:34.416 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:34.416 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:34.416 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:34.416 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:34.416 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:34.675 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:34.675 [253/265] Linking target lib/librte_net.so.24.0 00:01:34.675 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:01:34.675 [255/265] Linking target lib/librte_reorder.so.24.0 00:01:34.675 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:34.675 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:34.675 [258/265] Linking target lib/librte_hash.so.24.0 00:01:34.675 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:34.675 [260/265] Linking target lib/librte_security.so.24.0 00:01:34.675 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:34.934 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:34.934 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:34.934 [264/265] Linking target lib/librte_power.so.24.0 00:01:34.934 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:34.934 INFO: autodetecting backend as ninja 00:01:34.934 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:36.313 CC lib/ut/ut.o 00:01:36.313 CC lib/ut_mock/mock.o 00:01:36.313 CC lib/log/log.o 00:01:36.313 CC lib/log/log_flags.o 00:01:36.313 CC lib/log/log_deprecated.o 00:01:36.313 LIB libspdk_ut.a 00:01:36.313 LIB libspdk_ut_mock.a 00:01:36.313 LIB libspdk_log.a 00:01:36.313 SO libspdk_ut.so.2.0 00:01:36.313 SO libspdk_ut_mock.so.6.0 00:01:36.313 SO libspdk_log.so.7.0 00:01:36.313 SYMLINK libspdk_ut.so 00:01:36.313 SYMLINK libspdk_ut_mock.so 00:01:36.313 SYMLINK libspdk_log.so 00:01:36.879 CC lib/dma/dma.o 00:01:36.879 CXX lib/trace_parser/trace.o 00:01:36.879 CC lib/util/base64.o 00:01:36.879 CC lib/util/bit_array.o 00:01:36.879 CC lib/util/crc32.o 00:01:36.879 CC lib/util/cpuset.o 00:01:36.879 CC lib/util/crc16.o 00:01:36.879 CC lib/util/crc32c.o 
00:01:36.879 CC lib/util/crc32_ieee.o 00:01:36.879 CC lib/util/crc64.o 00:01:36.879 CC lib/ioat/ioat.o 00:01:36.879 CC lib/util/dif.o 00:01:36.879 CC lib/util/fd.o 00:01:36.879 CC lib/util/file.o 00:01:36.879 CC lib/util/hexlify.o 00:01:36.879 CC lib/util/iov.o 00:01:36.879 CC lib/util/strerror_tls.o 00:01:36.879 CC lib/util/math.o 00:01:36.879 CC lib/util/pipe.o 00:01:36.879 CC lib/util/string.o 00:01:36.879 CC lib/util/uuid.o 00:01:36.879 CC lib/util/fd_group.o 00:01:36.879 CC lib/util/xor.o 00:01:36.879 CC lib/util/zipf.o 00:01:36.879 LIB libspdk_dma.a 00:01:36.879 CC lib/vfio_user/host/vfio_user_pci.o 00:01:36.879 CC lib/vfio_user/host/vfio_user.o 00:01:36.879 SO libspdk_dma.so.4.0 00:01:37.137 SYMLINK libspdk_dma.so 00:01:37.137 LIB libspdk_ioat.a 00:01:37.137 SO libspdk_ioat.so.7.0 00:01:37.137 SYMLINK libspdk_ioat.so 00:01:37.137 LIB libspdk_vfio_user.a 00:01:37.137 SO libspdk_vfio_user.so.5.0 00:01:37.396 SYMLINK libspdk_vfio_user.so 00:01:37.396 LIB libspdk_util.a 00:01:37.396 SO libspdk_util.so.9.0 00:01:37.396 SYMLINK libspdk_util.so 00:01:37.396 LIB libspdk_trace_parser.a 00:01:37.655 SO libspdk_trace_parser.so.5.0 00:01:37.655 SYMLINK libspdk_trace_parser.so 00:01:37.913 CC lib/rdma/common.o 00:01:37.913 CC lib/json/json_parse.o 00:01:37.913 CC lib/rdma/rdma_verbs.o 00:01:37.913 CC lib/json/json_write.o 00:01:37.913 CC lib/json/json_util.o 00:01:37.913 CC lib/env_dpdk/env.o 00:01:37.913 CC lib/env_dpdk/memory.o 00:01:37.913 CC lib/env_dpdk/init.o 00:01:37.913 CC lib/env_dpdk/pci.o 00:01:37.913 CC lib/env_dpdk/threads.o 00:01:37.913 CC lib/env_dpdk/pci_ioat.o 00:01:37.913 CC lib/env_dpdk/pci_virtio.o 00:01:37.913 CC lib/env_dpdk/pci_vmd.o 00:01:37.914 CC lib/env_dpdk/pci_idxd.o 00:01:37.914 CC lib/env_dpdk/pci_dpdk.o 00:01:37.914 CC lib/vmd/led.o 00:01:37.914 CC lib/env_dpdk/pci_event.o 00:01:37.914 CC lib/vmd/vmd.o 00:01:37.914 CC lib/env_dpdk/sigbus_handler.o 00:01:37.914 CC lib/conf/conf.o 00:01:37.914 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:37.914 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:37.914 CC lib/idxd/idxd.o 00:01:37.914 CC lib/idxd/idxd_user.o 00:01:38.172 LIB libspdk_conf.a 00:01:38.172 LIB libspdk_rdma.a 00:01:38.172 LIB libspdk_json.a 00:01:38.172 SO libspdk_conf.so.6.0 00:01:38.172 SO libspdk_rdma.so.6.0 00:01:38.172 SO libspdk_json.so.6.0 00:01:38.172 SYMLINK libspdk_conf.so 00:01:38.172 SYMLINK libspdk_rdma.so 00:01:38.172 SYMLINK libspdk_json.so 00:01:38.431 LIB libspdk_idxd.a 00:01:38.431 SO libspdk_idxd.so.12.0 00:01:38.431 LIB libspdk_vmd.a 00:01:38.431 SO libspdk_vmd.so.6.0 00:01:38.431 SYMLINK libspdk_idxd.so 00:01:38.689 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:38.689 CC lib/jsonrpc/jsonrpc_server.o 00:01:38.689 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:38.689 CC lib/jsonrpc/jsonrpc_client.o 00:01:38.689 SYMLINK libspdk_vmd.so 00:01:38.948 LIB libspdk_jsonrpc.a 00:01:38.948 SO libspdk_jsonrpc.so.6.0 00:01:38.948 SYMLINK libspdk_jsonrpc.so 00:01:39.206 LIB libspdk_env_dpdk.a 00:01:39.206 SO libspdk_env_dpdk.so.14.0 00:01:39.206 CC lib/rpc/rpc.o 00:01:39.206 SYMLINK libspdk_env_dpdk.so 00:01:39.466 LIB libspdk_rpc.a 00:01:39.466 SO libspdk_rpc.so.6.0 00:01:39.724 SYMLINK libspdk_rpc.so 00:01:39.983 CC lib/trace/trace.o 00:01:39.983 CC lib/trace/trace_flags.o 00:01:39.983 CC lib/trace/trace_rpc.o 00:01:39.983 CC lib/keyring/keyring.o 00:01:39.983 CC lib/keyring/keyring_rpc.o 00:01:39.983 CC lib/notify/notify.o 00:01:39.983 CC lib/notify/notify_rpc.o 00:01:40.242 LIB libspdk_notify.a 00:01:40.242 LIB libspdk_trace.a 00:01:40.242 SO libspdk_notify.so.6.0 
00:01:40.242 LIB libspdk_keyring.a 00:01:40.242 SO libspdk_trace.so.10.0 00:01:40.242 SO libspdk_keyring.so.1.0 00:01:40.242 SYMLINK libspdk_notify.so 00:01:40.242 SYMLINK libspdk_trace.so 00:01:40.242 SYMLINK libspdk_keyring.so 00:01:40.501 CC lib/thread/thread.o 00:01:40.501 CC lib/thread/iobuf.o 00:01:40.759 CC lib/sock/sock.o 00:01:40.759 CC lib/sock/sock_rpc.o 00:01:41.018 LIB libspdk_sock.a 00:01:41.018 SO libspdk_sock.so.9.0 00:01:41.018 SYMLINK libspdk_sock.so 00:01:41.586 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:41.586 CC lib/nvme/nvme_fabric.o 00:01:41.586 CC lib/nvme/nvme_ctrlr.o 00:01:41.586 CC lib/nvme/nvme_ns_cmd.o 00:01:41.586 CC lib/nvme/nvme_ns.o 00:01:41.586 CC lib/nvme/nvme_pcie_common.o 00:01:41.586 CC lib/nvme/nvme.o 00:01:41.586 CC lib/nvme/nvme_pcie.o 00:01:41.586 CC lib/nvme/nvme_qpair.o 00:01:41.586 CC lib/nvme/nvme_quirks.o 00:01:41.586 CC lib/nvme/nvme_transport.o 00:01:41.586 CC lib/nvme/nvme_discovery.o 00:01:41.586 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:41.586 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:41.586 CC lib/nvme/nvme_tcp.o 00:01:41.586 CC lib/nvme/nvme_opal.o 00:01:41.586 CC lib/nvme/nvme_io_msg.o 00:01:41.586 CC lib/nvme/nvme_poll_group.o 00:01:41.586 CC lib/nvme/nvme_zns.o 00:01:41.586 CC lib/nvme/nvme_stubs.o 00:01:41.586 CC lib/nvme/nvme_auth.o 00:01:41.586 CC lib/nvme/nvme_cuse.o 00:01:41.586 CC lib/nvme/nvme_vfio_user.o 00:01:41.586 CC lib/nvme/nvme_rdma.o 00:01:41.901 LIB libspdk_thread.a 00:01:41.901 SO libspdk_thread.so.10.0 00:01:42.161 SYMLINK libspdk_thread.so 00:01:42.420 CC lib/init/subsystem.o 00:01:42.420 CC lib/init/json_config.o 00:01:42.420 CC lib/init/subsystem_rpc.o 00:01:42.420 CC lib/init/rpc.o 00:01:42.420 CC lib/blob/zeroes.o 00:01:42.420 CC lib/blob/blobstore.o 00:01:42.420 CC lib/blob/request.o 00:01:42.420 CC lib/blob/blob_bs_dev.o 00:01:42.420 CC lib/virtio/virtio.o 00:01:42.420 CC lib/virtio/virtio_vhost_user.o 00:01:42.420 CC lib/virtio/virtio_vfio_user.o 00:01:42.420 CC lib/virtio/virtio_pci.o 00:01:42.420 CC lib/vfu_tgt/tgt_endpoint.o 00:01:42.420 CC lib/vfu_tgt/tgt_rpc.o 00:01:42.420 CC lib/accel/accel.o 00:01:42.420 CC lib/accel/accel_rpc.o 00:01:42.420 CC lib/accel/accel_sw.o 00:01:42.678 LIB libspdk_init.a 00:01:42.678 SO libspdk_init.so.5.0 00:01:42.678 LIB libspdk_vfu_tgt.a 00:01:42.678 LIB libspdk_virtio.a 00:01:42.678 SO libspdk_vfu_tgt.so.3.0 00:01:42.678 SYMLINK libspdk_init.so 00:01:42.678 SO libspdk_virtio.so.7.0 00:01:42.937 SYMLINK libspdk_vfu_tgt.so 00:01:42.937 SYMLINK libspdk_virtio.so 00:01:43.195 CC lib/event/app.o 00:01:43.195 CC lib/event/reactor.o 00:01:43.195 CC lib/event/log_rpc.o 00:01:43.195 CC lib/event/app_rpc.o 00:01:43.195 CC lib/event/scheduler_static.o 00:01:43.453 LIB libspdk_accel.a 00:01:43.453 SO libspdk_accel.so.15.0 00:01:43.453 LIB libspdk_nvme.a 00:01:43.453 SYMLINK libspdk_accel.so 00:01:43.453 LIB libspdk_event.a 00:01:43.453 SO libspdk_nvme.so.13.0 00:01:43.711 SO libspdk_event.so.13.0 00:01:43.711 SYMLINK libspdk_event.so 00:01:43.711 CC lib/bdev/bdev.o 00:01:43.711 CC lib/bdev/bdev_rpc.o 00:01:43.711 CC lib/bdev/bdev_zone.o 00:01:43.711 CC lib/bdev/part.o 00:01:43.711 CC lib/bdev/scsi_nvme.o 00:01:43.969 SYMLINK libspdk_nvme.so 00:01:45.393 LIB libspdk_blob.a 00:01:45.393 SO libspdk_blob.so.11.0 00:01:45.393 SYMLINK libspdk_blob.so 00:01:45.650 CC lib/lvol/lvol.o 00:01:45.650 CC lib/blobfs/blobfs.o 00:01:45.650 CC lib/blobfs/tree.o 00:01:46.216 LIB libspdk_bdev.a 00:01:46.216 SO libspdk_bdev.so.15.0 00:01:46.216 SYMLINK libspdk_bdev.so 00:01:46.216 LIB libspdk_blobfs.a 00:01:46.216 
LIB libspdk_lvol.a 00:01:46.474 SO libspdk_blobfs.so.10.0 00:01:46.474 SO libspdk_lvol.so.10.0 00:01:46.474 SYMLINK libspdk_lvol.so 00:01:46.474 SYMLINK libspdk_blobfs.so 00:01:46.732 CC lib/nbd/nbd.o 00:01:46.732 CC lib/nbd/nbd_rpc.o 00:01:46.732 CC lib/scsi/lun.o 00:01:46.732 CC lib/scsi/port.o 00:01:46.732 CC lib/scsi/dev.o 00:01:46.732 CC lib/nvmf/ctrlr.o 00:01:46.732 CC lib/nvmf/ctrlr_bdev.o 00:01:46.732 CC lib/nvmf/ctrlr_discovery.o 00:01:46.732 CC lib/ublk/ublk.o 00:01:46.732 CC lib/scsi/scsi.o 00:01:46.732 CC lib/ublk/ublk_rpc.o 00:01:46.732 CC lib/nvmf/subsystem.o 00:01:46.732 CC lib/scsi/scsi_bdev.o 00:01:46.732 CC lib/scsi/scsi_pr.o 00:01:46.732 CC lib/nvmf/nvmf.o 00:01:46.732 CC lib/scsi/scsi_rpc.o 00:01:46.732 CC lib/nvmf/nvmf_rpc.o 00:01:46.732 CC lib/scsi/task.o 00:01:46.732 CC lib/nvmf/transport.o 00:01:46.732 CC lib/nvmf/tcp.o 00:01:46.732 CC lib/nvmf/vfio_user.o 00:01:46.732 CC lib/nvmf/rdma.o 00:01:46.732 CC lib/ftl/ftl_core.o 00:01:46.732 CC lib/ftl/ftl_init.o 00:01:46.732 CC lib/ftl/ftl_layout.o 00:01:46.732 CC lib/ftl/ftl_debug.o 00:01:46.732 CC lib/ftl/ftl_io.o 00:01:46.732 CC lib/ftl/ftl_l2p_flat.o 00:01:46.732 CC lib/ftl/ftl_sb.o 00:01:46.732 CC lib/ftl/ftl_l2p.o 00:01:46.732 CC lib/ftl/ftl_nv_cache.o 00:01:46.732 CC lib/ftl/ftl_band.o 00:01:46.732 CC lib/ftl/ftl_writer.o 00:01:46.732 CC lib/ftl/ftl_band_ops.o 00:01:46.732 CC lib/ftl/ftl_l2p_cache.o 00:01:46.732 CC lib/ftl/ftl_rq.o 00:01:46.732 CC lib/ftl/ftl_reloc.o 00:01:46.732 CC lib/ftl/ftl_p2l.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:46.732 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:46.732 CC lib/ftl/utils/ftl_conf.o 00:01:46.732 CC lib/ftl/utils/ftl_mempool.o 00:01:46.732 CC lib/ftl/utils/ftl_md.o 00:01:46.732 CC lib/ftl/utils/ftl_bitmap.o 00:01:46.732 CC lib/ftl/utils/ftl_property.o 00:01:46.732 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:46.732 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:46.732 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:46.732 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:46.732 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:46.732 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:46.732 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:46.732 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:46.732 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:46.732 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:46.732 CC lib/ftl/base/ftl_base_dev.o 00:01:46.732 CC lib/ftl/ftl_trace.o 00:01:46.732 CC lib/ftl/base/ftl_base_bdev.o 00:01:47.299 LIB libspdk_nbd.a 00:01:47.299 SO libspdk_nbd.so.7.0 00:01:47.299 LIB libspdk_scsi.a 00:01:47.299 SYMLINK libspdk_nbd.so 00:01:47.299 SO libspdk_scsi.so.9.0 00:01:47.299 LIB libspdk_ublk.a 00:01:47.557 SO libspdk_ublk.so.3.0 00:01:47.557 SYMLINK libspdk_scsi.so 00:01:47.557 SYMLINK libspdk_ublk.so 00:01:47.557 LIB libspdk_ftl.a 00:01:47.815 CC lib/vhost/vhost.o 00:01:47.815 CC lib/vhost/vhost_rpc.o 00:01:47.815 CC lib/vhost/vhost_blk.o 00:01:47.815 CC lib/vhost/vhost_scsi.o 00:01:47.815 CC lib/vhost/rte_vhost_user.o 00:01:47.815 CC lib/iscsi/conn.o 00:01:47.815 CC lib/iscsi/init_grp.o 00:01:47.815 CC 
lib/iscsi/iscsi.o 00:01:47.815 CC lib/iscsi/md5.o 00:01:47.815 CC lib/iscsi/param.o 00:01:47.815 CC lib/iscsi/portal_grp.o 00:01:47.815 CC lib/iscsi/tgt_node.o 00:01:47.815 CC lib/iscsi/iscsi_subsystem.o 00:01:47.815 CC lib/iscsi/iscsi_rpc.o 00:01:47.815 CC lib/iscsi/task.o 00:01:47.815 SO libspdk_ftl.so.9.0 00:01:48.073 SYMLINK libspdk_ftl.so 00:01:48.638 LIB libspdk_vhost.a 00:01:48.638 SO libspdk_vhost.so.8.0 00:01:48.638 LIB libspdk_nvmf.a 00:01:48.896 SO libspdk_nvmf.so.18.0 00:01:48.896 SYMLINK libspdk_vhost.so 00:01:49.154 LIB libspdk_iscsi.a 00:01:49.154 SYMLINK libspdk_nvmf.so 00:01:49.154 SO libspdk_iscsi.so.8.0 00:01:49.413 SYMLINK libspdk_iscsi.so 00:01:49.673 CC module/env_dpdk/env_dpdk_rpc.o 00:01:49.932 CC module/vfu_device/vfu_virtio.o 00:01:49.932 CC module/vfu_device/vfu_virtio_blk.o 00:01:49.932 CC module/vfu_device/vfu_virtio_scsi.o 00:01:49.932 CC module/vfu_device/vfu_virtio_rpc.o 00:01:49.932 LIB libspdk_env_dpdk_rpc.a 00:01:49.932 CC module/sock/posix/posix.o 00:01:49.932 CC module/accel/dsa/accel_dsa.o 00:01:49.932 CC module/accel/dsa/accel_dsa_rpc.o 00:01:49.932 CC module/accel/iaa/accel_iaa.o 00:01:49.932 CC module/accel/ioat/accel_ioat.o 00:01:49.932 CC module/accel/iaa/accel_iaa_rpc.o 00:01:49.932 CC module/accel/ioat/accel_ioat_rpc.o 00:01:49.932 SO libspdk_env_dpdk_rpc.so.6.0 00:01:49.932 CC module/keyring/file/keyring.o 00:01:49.932 CC module/keyring/file/keyring_rpc.o 00:01:49.932 CC module/blob/bdev/blob_bdev.o 00:01:49.932 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:49.932 CC module/scheduler/gscheduler/gscheduler.o 00:01:49.932 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:49.932 CC module/accel/error/accel_error.o 00:01:49.932 CC module/accel/error/accel_error_rpc.o 00:01:49.932 SYMLINK libspdk_env_dpdk_rpc.so 00:01:50.190 LIB libspdk_keyring_file.a 00:01:50.190 LIB libspdk_scheduler_dpdk_governor.a 00:01:50.190 LIB libspdk_scheduler_gscheduler.a 00:01:50.190 SO libspdk_keyring_file.so.1.0 00:01:50.190 LIB libspdk_accel_iaa.a 00:01:50.190 LIB libspdk_accel_ioat.a 00:01:50.190 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:50.190 LIB libspdk_accel_error.a 00:01:50.190 SO libspdk_scheduler_gscheduler.so.4.0 00:01:50.190 LIB libspdk_accel_dsa.a 00:01:50.190 LIB libspdk_scheduler_dynamic.a 00:01:50.190 SO libspdk_accel_iaa.so.3.0 00:01:50.190 SO libspdk_accel_ioat.so.6.0 00:01:50.190 SYMLINK libspdk_keyring_file.so 00:01:50.190 SO libspdk_accel_error.so.2.0 00:01:50.190 LIB libspdk_blob_bdev.a 00:01:50.190 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:50.190 SO libspdk_accel_dsa.so.5.0 00:01:50.190 SO libspdk_scheduler_dynamic.so.4.0 00:01:50.190 SYMLINK libspdk_scheduler_gscheduler.so 00:01:50.190 SYMLINK libspdk_accel_iaa.so 00:01:50.190 SYMLINK libspdk_accel_ioat.so 00:01:50.190 SO libspdk_blob_bdev.so.11.0 00:01:50.190 SYMLINK libspdk_accel_error.so 00:01:50.449 SYMLINK libspdk_accel_dsa.so 00:01:50.449 SYMLINK libspdk_scheduler_dynamic.so 00:01:50.449 SYMLINK libspdk_blob_bdev.so 00:01:50.449 LIB libspdk_vfu_device.a 00:01:50.449 SO libspdk_vfu_device.so.3.0 00:01:50.709 SYMLINK libspdk_vfu_device.so 00:01:50.709 LIB libspdk_sock_posix.a 00:01:50.709 SO libspdk_sock_posix.so.6.0 00:01:50.709 SYMLINK libspdk_sock_posix.so 00:01:50.967 CC module/blobfs/bdev/blobfs_bdev.o 00:01:50.967 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:50.967 CC module/bdev/error/vbdev_error.o 00:01:50.967 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:50.967 CC module/bdev/lvol/vbdev_lvol.o 00:01:50.967 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:50.967 CC module/bdev/error/vbdev_error_rpc.o 00:01:50.967 CC module/bdev/nvme/bdev_nvme.o 00:01:50.967 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:50.967 CC module/bdev/nvme/nvme_rpc.o 00:01:50.967 CC module/bdev/iscsi/bdev_iscsi.o 00:01:50.967 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:50.967 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:50.967 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:50.967 CC module/bdev/null/bdev_null.o 00:01:50.968 CC module/bdev/gpt/gpt.o 00:01:50.968 CC module/bdev/delay/vbdev_delay.o 00:01:50.968 CC module/bdev/nvme/bdev_mdns_client.o 00:01:50.968 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:50.968 CC module/bdev/nvme/vbdev_opal.o 00:01:50.968 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:50.968 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:50.968 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:50.968 CC module/bdev/malloc/bdev_malloc.o 00:01:50.968 CC module/bdev/gpt/vbdev_gpt.o 00:01:50.968 CC module/bdev/passthru/vbdev_passthru.o 00:01:50.968 CC module/bdev/null/bdev_null_rpc.o 00:01:50.968 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:50.968 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:50.968 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:50.968 CC module/bdev/ftl/bdev_ftl.o 00:01:50.968 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:50.968 CC module/bdev/aio/bdev_aio.o 00:01:50.968 CC module/bdev/aio/bdev_aio_rpc.o 00:01:50.968 CC module/bdev/raid/bdev_raid.o 00:01:50.968 CC module/bdev/raid/bdev_raid_rpc.o 00:01:50.968 CC module/bdev/raid/bdev_raid_sb.o 00:01:50.968 CC module/bdev/raid/raid0.o 00:01:50.968 CC module/bdev/raid/raid1.o 00:01:50.968 CC module/bdev/raid/concat.o 00:01:50.968 CC module/bdev/split/vbdev_split.o 00:01:50.968 CC module/bdev/split/vbdev_split_rpc.o 00:01:51.226 LIB libspdk_blobfs_bdev.a 00:01:51.226 SO libspdk_blobfs_bdev.so.6.0 00:01:51.226 LIB libspdk_bdev_split.a 00:01:51.226 LIB libspdk_bdev_null.a 00:01:51.226 LIB libspdk_bdev_gpt.a 00:01:51.226 LIB libspdk_bdev_error.a 00:01:51.226 SYMLINK libspdk_blobfs_bdev.so 00:01:51.226 LIB libspdk_bdev_ftl.a 00:01:51.226 SO libspdk_bdev_split.so.6.0 00:01:51.226 SO libspdk_bdev_gpt.so.6.0 00:01:51.226 SO libspdk_bdev_null.so.6.0 00:01:51.226 LIB libspdk_bdev_zone_block.a 00:01:51.226 SO libspdk_bdev_error.so.6.0 00:01:51.226 LIB libspdk_bdev_passthru.a 00:01:51.226 SO libspdk_bdev_ftl.so.6.0 00:01:51.226 SO libspdk_bdev_passthru.so.6.0 00:01:51.226 SO libspdk_bdev_zone_block.so.6.0 00:01:51.226 LIB libspdk_bdev_aio.a 00:01:51.226 LIB libspdk_bdev_delay.a 00:01:51.226 LIB libspdk_bdev_malloc.a 00:01:51.226 LIB libspdk_bdev_iscsi.a 00:01:51.226 SYMLINK libspdk_bdev_split.so 00:01:51.226 SYMLINK libspdk_bdev_error.so 00:01:51.226 SYMLINK libspdk_bdev_null.so 00:01:51.226 SYMLINK libspdk_bdev_gpt.so 00:01:51.226 SO libspdk_bdev_aio.so.6.0 00:01:51.226 SO libspdk_bdev_iscsi.so.6.0 00:01:51.226 SO libspdk_bdev_delay.so.6.0 00:01:51.485 SYMLINK libspdk_bdev_ftl.so 00:01:51.485 SO libspdk_bdev_malloc.so.6.0 00:01:51.485 SYMLINK libspdk_bdev_passthru.so 00:01:51.485 SYMLINK libspdk_bdev_zone_block.so 00:01:51.485 SYMLINK libspdk_bdev_aio.so 00:01:51.485 LIB libspdk_bdev_lvol.a 00:01:51.485 SYMLINK libspdk_bdev_iscsi.so 00:01:51.485 SYMLINK libspdk_bdev_delay.so 00:01:51.485 SYMLINK libspdk_bdev_malloc.so 00:01:51.485 SO libspdk_bdev_lvol.so.6.0 00:01:51.485 LIB libspdk_bdev_virtio.a 00:01:51.485 SO libspdk_bdev_virtio.so.6.0 00:01:51.485 SYMLINK libspdk_bdev_lvol.so 00:01:51.485 SYMLINK libspdk_bdev_virtio.so 00:01:51.744 LIB libspdk_bdev_raid.a 
00:01:52.004 SO libspdk_bdev_raid.so.6.0 00:01:52.004 SYMLINK libspdk_bdev_raid.so 00:01:52.941 LIB libspdk_bdev_nvme.a 00:01:52.942 SO libspdk_bdev_nvme.so.7.0 00:01:52.942 SYMLINK libspdk_bdev_nvme.so 00:01:53.878 CC module/event/subsystems/scheduler/scheduler.o 00:01:53.879 CC module/event/subsystems/sock/sock.o 00:01:53.879 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:53.879 CC module/event/subsystems/vmd/vmd.o 00:01:53.879 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:53.879 CC module/event/subsystems/iobuf/iobuf.o 00:01:53.879 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:53.879 CC module/event/subsystems/keyring/keyring.o 00:01:53.879 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:53.879 LIB libspdk_event_scheduler.a 00:01:53.879 LIB libspdk_event_sock.a 00:01:53.879 SO libspdk_event_scheduler.so.4.0 00:01:53.879 SO libspdk_event_sock.so.5.0 00:01:53.879 LIB libspdk_event_vhost_blk.a 00:01:53.879 LIB libspdk_event_vmd.a 00:01:53.879 LIB libspdk_event_iobuf.a 00:01:53.879 LIB libspdk_event_keyring.a 00:01:53.879 LIB libspdk_event_vfu_tgt.a 00:01:53.879 SYMLINK libspdk_event_scheduler.so 00:01:53.879 SO libspdk_event_vhost_blk.so.3.0 00:01:53.879 SYMLINK libspdk_event_sock.so 00:01:53.879 SO libspdk_event_iobuf.so.3.0 00:01:53.879 SO libspdk_event_vmd.so.6.0 00:01:53.879 SO libspdk_event_vfu_tgt.so.3.0 00:01:53.879 SO libspdk_event_keyring.so.1.0 00:01:53.879 SYMLINK libspdk_event_vhost_blk.so 00:01:54.138 SYMLINK libspdk_event_iobuf.so 00:01:54.138 SYMLINK libspdk_event_vfu_tgt.so 00:01:54.138 SYMLINK libspdk_event_vmd.so 00:01:54.138 SYMLINK libspdk_event_keyring.so 00:01:54.398 CC module/event/subsystems/accel/accel.o 00:01:54.398 LIB libspdk_event_accel.a 00:01:54.657 SO libspdk_event_accel.so.6.0 00:01:54.657 SYMLINK libspdk_event_accel.so 00:01:54.917 CC module/event/subsystems/bdev/bdev.o 00:01:55.176 LIB libspdk_event_bdev.a 00:01:55.176 SO libspdk_event_bdev.so.6.0 00:01:55.176 SYMLINK libspdk_event_bdev.so 00:01:55.744 CC module/event/subsystems/scsi/scsi.o 00:01:55.744 CC module/event/subsystems/ublk/ublk.o 00:01:55.744 CC module/event/subsystems/nbd/nbd.o 00:01:55.744 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:55.744 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:55.744 LIB libspdk_event_scsi.a 00:01:55.744 LIB libspdk_event_nbd.a 00:01:55.744 LIB libspdk_event_ublk.a 00:01:55.744 SO libspdk_event_scsi.so.6.0 00:01:55.744 SO libspdk_event_ublk.so.3.0 00:01:55.744 SO libspdk_event_nbd.so.6.0 00:01:55.744 LIB libspdk_event_nvmf.a 00:01:55.744 SYMLINK libspdk_event_scsi.so 00:01:55.744 SYMLINK libspdk_event_ublk.so 00:01:55.744 SYMLINK libspdk_event_nbd.so 00:01:56.003 SO libspdk_event_nvmf.so.6.0 00:01:56.003 SYMLINK libspdk_event_nvmf.so 00:01:56.262 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:56.262 CC module/event/subsystems/iscsi/iscsi.o 00:01:56.262 LIB libspdk_event_vhost_scsi.a 00:01:56.262 SO libspdk_event_vhost_scsi.so.3.0 00:01:56.262 LIB libspdk_event_iscsi.a 00:01:56.262 SO libspdk_event_iscsi.so.6.0 00:01:56.262 SYMLINK libspdk_event_vhost_scsi.so 00:01:56.521 SYMLINK libspdk_event_iscsi.so 00:01:56.521 SO libspdk.so.6.0 00:01:56.521 SYMLINK libspdk.so 00:01:57.103 CC app/spdk_nvme_identify/identify.o 00:01:57.103 CXX app/trace/trace.o 00:01:57.103 CC app/trace_record/trace_record.o 00:01:57.103 TEST_HEADER include/spdk/accel.h 00:01:57.103 CC test/rpc_client/rpc_client_test.o 00:01:57.104 TEST_HEADER include/spdk/accel_module.h 00:01:57.104 TEST_HEADER include/spdk/assert.h 00:01:57.104 CC app/spdk_lspci/spdk_lspci.o 
00:01:57.104 CC app/spdk_nvme_perf/perf.o 00:01:57.104 TEST_HEADER include/spdk/barrier.h 00:01:57.104 TEST_HEADER include/spdk/bdev.h 00:01:57.104 TEST_HEADER include/spdk/base64.h 00:01:57.104 CC app/spdk_top/spdk_top.o 00:01:57.104 TEST_HEADER include/spdk/bdev_module.h 00:01:57.104 TEST_HEADER include/spdk/bdev_zone.h 00:01:57.104 TEST_HEADER include/spdk/bit_array.h 00:01:57.104 TEST_HEADER include/spdk/bit_pool.h 00:01:57.104 TEST_HEADER include/spdk/blob_bdev.h 00:01:57.104 CC app/spdk_nvme_discover/discovery_aer.o 00:01:57.104 TEST_HEADER include/spdk/blobfs.h 00:01:57.104 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:57.104 TEST_HEADER include/spdk/blob.h 00:01:57.104 TEST_HEADER include/spdk/conf.h 00:01:57.104 TEST_HEADER include/spdk/config.h 00:01:57.104 TEST_HEADER include/spdk/crc16.h 00:01:57.104 TEST_HEADER include/spdk/cpuset.h 00:01:57.104 TEST_HEADER include/spdk/crc64.h 00:01:57.104 TEST_HEADER include/spdk/crc32.h 00:01:57.104 TEST_HEADER include/spdk/dif.h 00:01:57.104 TEST_HEADER include/spdk/dma.h 00:01:57.104 TEST_HEADER include/spdk/env_dpdk.h 00:01:57.104 TEST_HEADER include/spdk/endian.h 00:01:57.104 TEST_HEADER include/spdk/event.h 00:01:57.104 TEST_HEADER include/spdk/env.h 00:01:57.104 TEST_HEADER include/spdk/fd_group.h 00:01:57.104 TEST_HEADER include/spdk/fd.h 00:01:57.104 TEST_HEADER include/spdk/file.h 00:01:57.104 TEST_HEADER include/spdk/gpt_spec.h 00:01:57.104 TEST_HEADER include/spdk/ftl.h 00:01:57.104 TEST_HEADER include/spdk/hexlify.h 00:01:57.104 TEST_HEADER include/spdk/idxd.h 00:01:57.104 TEST_HEADER include/spdk/histogram_data.h 00:01:57.104 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:57.104 CC app/iscsi_tgt/iscsi_tgt.o 00:01:57.104 TEST_HEADER include/spdk/init.h 00:01:57.104 TEST_HEADER include/spdk/idxd_spec.h 00:01:57.104 TEST_HEADER include/spdk/ioat.h 00:01:57.104 TEST_HEADER include/spdk/ioat_spec.h 00:01:57.104 TEST_HEADER include/spdk/iscsi_spec.h 00:01:57.104 TEST_HEADER include/spdk/jsonrpc.h 00:01:57.104 TEST_HEADER include/spdk/json.h 00:01:57.104 TEST_HEADER include/spdk/keyring_module.h 00:01:57.104 TEST_HEADER include/spdk/keyring.h 00:01:57.104 TEST_HEADER include/spdk/likely.h 00:01:57.104 TEST_HEADER include/spdk/log.h 00:01:57.104 TEST_HEADER include/spdk/memory.h 00:01:57.104 TEST_HEADER include/spdk/lvol.h 00:01:57.104 TEST_HEADER include/spdk/nbd.h 00:01:57.104 TEST_HEADER include/spdk/mmio.h 00:01:57.104 TEST_HEADER include/spdk/notify.h 00:01:57.104 TEST_HEADER include/spdk/nvme.h 00:01:57.104 TEST_HEADER include/spdk/nvme_intel.h 00:01:57.104 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:57.104 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:57.104 TEST_HEADER include/spdk/nvme_spec.h 00:01:57.104 TEST_HEADER include/spdk/nvme_zns.h 00:01:57.104 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:57.104 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:57.104 TEST_HEADER include/spdk/nvmf.h 00:01:57.104 TEST_HEADER include/spdk/nvmf_spec.h 00:01:57.104 CC app/nvmf_tgt/nvmf_main.o 00:01:57.104 TEST_HEADER include/spdk/opal.h 00:01:57.104 TEST_HEADER include/spdk/nvmf_transport.h 00:01:57.104 TEST_HEADER include/spdk/pci_ids.h 00:01:57.104 TEST_HEADER include/spdk/opal_spec.h 00:01:57.104 TEST_HEADER include/spdk/queue.h 00:01:57.104 TEST_HEADER include/spdk/pipe.h 00:01:57.104 CC app/spdk_dd/spdk_dd.o 00:01:57.104 TEST_HEADER include/spdk/reduce.h 00:01:57.104 TEST_HEADER include/spdk/scheduler.h 00:01:57.104 TEST_HEADER include/spdk/scsi.h 00:01:57.104 TEST_HEADER include/spdk/rpc.h 00:01:57.104 TEST_HEADER 
include/spdk/scsi_spec.h 00:01:57.104 TEST_HEADER include/spdk/sock.h 00:01:57.104 TEST_HEADER include/spdk/stdinc.h 00:01:57.104 TEST_HEADER include/spdk/thread.h 00:01:57.104 TEST_HEADER include/spdk/string.h 00:01:57.104 TEST_HEADER include/spdk/trace.h 00:01:57.104 TEST_HEADER include/spdk/trace_parser.h 00:01:57.104 TEST_HEADER include/spdk/tree.h 00:01:57.104 TEST_HEADER include/spdk/util.h 00:01:57.104 TEST_HEADER include/spdk/ublk.h 00:01:57.104 TEST_HEADER include/spdk/uuid.h 00:01:57.104 TEST_HEADER include/spdk/version.h 00:01:57.104 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:57.104 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:57.104 CC app/vhost/vhost.o 00:01:57.104 TEST_HEADER include/spdk/vhost.h 00:01:57.104 TEST_HEADER include/spdk/vmd.h 00:01:57.104 TEST_HEADER include/spdk/xor.h 00:01:57.104 TEST_HEADER include/spdk/zipf.h 00:01:57.104 CXX test/cpp_headers/accel.o 00:01:57.104 CXX test/cpp_headers/accel_module.o 00:01:57.104 CXX test/cpp_headers/assert.o 00:01:57.104 CXX test/cpp_headers/barrier.o 00:01:57.104 CXX test/cpp_headers/base64.o 00:01:57.104 CXX test/cpp_headers/bdev.o 00:01:57.104 CXX test/cpp_headers/bdev_module.o 00:01:57.104 CXX test/cpp_headers/bdev_zone.o 00:01:57.104 CXX test/cpp_headers/bit_array.o 00:01:57.104 CXX test/cpp_headers/bit_pool.o 00:01:57.104 CXX test/cpp_headers/blob_bdev.o 00:01:57.104 CXX test/cpp_headers/blobfs_bdev.o 00:01:57.104 CXX test/cpp_headers/blobfs.o 00:01:57.104 CXX test/cpp_headers/blob.o 00:01:57.104 CXX test/cpp_headers/conf.o 00:01:57.104 CXX test/cpp_headers/cpuset.o 00:01:57.104 CXX test/cpp_headers/config.o 00:01:57.104 CXX test/cpp_headers/crc16.o 00:01:57.104 CXX test/cpp_headers/crc32.o 00:01:57.104 CXX test/cpp_headers/crc64.o 00:01:57.104 CXX test/cpp_headers/dif.o 00:01:57.104 CXX test/cpp_headers/dma.o 00:01:57.104 CXX test/cpp_headers/endian.o 00:01:57.104 CXX test/cpp_headers/env_dpdk.o 00:01:57.104 CXX test/cpp_headers/env.o 00:01:57.104 CXX test/cpp_headers/event.o 00:01:57.104 CXX test/cpp_headers/fd_group.o 00:01:57.104 CXX test/cpp_headers/fd.o 00:01:57.104 CXX test/cpp_headers/file.o 00:01:57.104 CXX test/cpp_headers/ftl.o 00:01:57.104 CXX test/cpp_headers/gpt_spec.o 00:01:57.104 CXX test/cpp_headers/hexlify.o 00:01:57.104 CXX test/cpp_headers/idxd.o 00:01:57.104 CXX test/cpp_headers/idxd_spec.o 00:01:57.104 CXX test/cpp_headers/histogram_data.o 00:01:57.104 CXX test/cpp_headers/init.o 00:01:57.104 CXX test/cpp_headers/ioat.o 00:01:57.104 CC app/spdk_tgt/spdk_tgt.o 00:01:57.104 CC examples/accel/perf/accel_perf.o 00:01:57.104 CC examples/idxd/perf/perf.o 00:01:57.104 CC examples/ioat/verify/verify.o 00:01:57.104 CC examples/ioat/perf/perf.o 00:01:57.382 CC examples/sock/hello_world/hello_sock.o 00:01:57.382 CC examples/nvme/hello_world/hello_world.o 00:01:57.382 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:57.382 CXX test/cpp_headers/ioat_spec.o 00:01:57.382 CC examples/vmd/lsvmd/lsvmd.o 00:01:57.382 CC examples/vmd/led/led.o 00:01:57.382 CC examples/nvme/abort/abort.o 00:01:57.382 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:57.382 CC examples/nvme/reconnect/reconnect.o 00:01:57.382 CC examples/nvme/arbitration/arbitration.o 00:01:57.382 CC test/env/vtophys/vtophys.o 00:01:57.382 CC examples/util/zipf/zipf.o 00:01:57.382 CC test/event/event_perf/event_perf.o 00:01:57.382 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:57.382 CC test/env/memory/memory_ut.o 00:01:57.382 CC test/event/reactor/reactor.o 00:01:57.382 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:57.382 CC 
test/env/pci/pci_ut.o 00:01:57.382 CC examples/nvme/hotplug/hotplug.o 00:01:57.382 CC test/nvme/sgl/sgl.o 00:01:57.382 CC test/app/histogram_perf/histogram_perf.o 00:01:57.382 CC test/event/reactor_perf/reactor_perf.o 00:01:57.382 CC examples/thread/thread/thread_ex.o 00:01:57.382 CC app/fio/nvme/fio_plugin.o 00:01:57.382 CC test/app/jsoncat/jsoncat.o 00:01:57.382 CC test/app/stub/stub.o 00:01:57.382 CC test/nvme/e2edp/nvme_dp.o 00:01:57.382 CC test/event/app_repeat/app_repeat.o 00:01:57.382 CC test/nvme/overhead/overhead.o 00:01:57.382 CC test/nvme/aer/aer.o 00:01:57.382 CC test/nvme/reset/reset.o 00:01:57.382 CC examples/nvmf/nvmf/nvmf.o 00:01:57.382 CC test/accel/dif/dif.o 00:01:57.382 CC examples/bdev/hello_world/hello_bdev.o 00:01:57.382 CC test/nvme/simple_copy/simple_copy.o 00:01:57.382 CC test/nvme/compliance/nvme_compliance.o 00:01:57.382 CC test/thread/poller_perf/poller_perf.o 00:01:57.382 CC test/nvme/fused_ordering/fused_ordering.o 00:01:57.382 CC examples/blob/hello_world/hello_blob.o 00:01:57.382 CC test/nvme/err_injection/err_injection.o 00:01:57.382 CC test/nvme/startup/startup.o 00:01:57.382 CC test/nvme/connect_stress/connect_stress.o 00:01:57.382 CC test/nvme/boot_partition/boot_partition.o 00:01:57.382 CC test/nvme/reserve/reserve.o 00:01:57.382 CC test/nvme/cuse/cuse.o 00:01:57.382 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:57.382 CC test/bdev/bdevio/bdevio.o 00:01:57.382 CC examples/bdev/bdevperf/bdevperf.o 00:01:57.382 CC test/blobfs/mkfs/mkfs.o 00:01:57.382 CC examples/blob/cli/blobcli.o 00:01:57.382 CC test/event/scheduler/scheduler.o 00:01:57.382 CC test/nvme/fdp/fdp.o 00:01:57.382 CC test/dma/test_dma/test_dma.o 00:01:57.382 CC test/app/bdev_svc/bdev_svc.o 00:01:57.382 CC app/fio/bdev/fio_plugin.o 00:01:57.382 LINK spdk_lspci 00:01:57.648 LINK rpc_client_test 00:01:57.648 LINK nvmf_tgt 00:01:57.648 CC test/lvol/esnap/esnap.o 00:01:57.648 LINK spdk_nvme_discover 00:01:57.648 LINK iscsi_tgt 00:01:57.648 CC test/env/mem_callbacks/mem_callbacks.o 00:01:57.648 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:57.648 LINK interrupt_tgt 00:01:57.648 LINK vhost 00:01:57.926 LINK vtophys 00:01:57.926 LINK lsvmd 00:01:57.926 LINK histogram_perf 00:01:57.926 LINK zipf 00:01:57.926 LINK spdk_trace_record 00:01:57.926 LINK jsoncat 00:01:57.926 LINK env_dpdk_post_init 00:01:57.926 LINK reactor_perf 00:01:57.926 CXX test/cpp_headers/iscsi_spec.o 00:01:57.926 LINK event_perf 00:01:57.926 LINK poller_perf 00:01:57.926 LINK reactor 00:01:57.926 LINK led 00:01:57.926 LINK pmr_persistence 00:01:57.926 CXX test/cpp_headers/json.o 00:01:57.926 CXX test/cpp_headers/jsonrpc.o 00:01:57.926 LINK boot_partition 00:01:57.926 CXX test/cpp_headers/keyring.o 00:01:57.926 CXX test/cpp_headers/keyring_module.o 00:01:57.926 LINK spdk_tgt 00:01:57.926 LINK cmb_copy 00:01:57.926 CXX test/cpp_headers/likely.o 00:01:57.926 CXX test/cpp_headers/log.o 00:01:57.926 LINK app_repeat 00:01:57.926 CXX test/cpp_headers/lvol.o 00:01:57.926 CXX test/cpp_headers/mmio.o 00:01:57.926 CXX test/cpp_headers/memory.o 00:01:57.926 CXX test/cpp_headers/nbd.o 00:01:57.926 CXX test/cpp_headers/notify.o 00:01:57.926 CXX test/cpp_headers/nvme.o 00:01:57.926 CXX test/cpp_headers/nvme_intel.o 00:01:57.926 CXX test/cpp_headers/nvme_ocssd.o 00:01:57.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:57.926 CXX test/cpp_headers/nvme_zns.o 00:01:57.926 CXX test/cpp_headers/nvme_spec.o 00:01:57.926 CXX test/cpp_headers/nvmf_cmd.o 00:01:57.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:57.926 CXX test/cpp_headers/nvmf.o 
00:01:57.926 LINK stub 00:01:57.926 CXX test/cpp_headers/nvmf_spec.o 00:01:57.926 CXX test/cpp_headers/nvmf_transport.o 00:01:57.926 CXX test/cpp_headers/opal.o 00:01:57.926 CXX test/cpp_headers/opal_spec.o 00:01:57.926 CXX test/cpp_headers/pci_ids.o 00:01:57.926 CXX test/cpp_headers/pipe.o 00:01:57.926 CXX test/cpp_headers/queue.o 00:01:57.926 CXX test/cpp_headers/reduce.o 00:01:57.926 LINK startup 00:01:57.926 LINK mkfs 00:01:57.926 CXX test/cpp_headers/rpc.o 00:01:57.926 CXX test/cpp_headers/scheduler.o 00:01:57.926 CXX test/cpp_headers/scsi.o 00:01:57.926 LINK doorbell_aers 00:01:57.926 LINK bdev_svc 00:01:57.926 CXX test/cpp_headers/scsi_spec.o 00:01:57.926 LINK hello_world 00:01:57.926 LINK connect_stress 00:01:57.926 LINK reserve 00:01:57.926 CXX test/cpp_headers/sock.o 00:01:57.926 CXX test/cpp_headers/stdinc.o 00:01:57.926 LINK fused_ordering 00:01:57.926 CXX test/cpp_headers/string.o 00:01:57.926 LINK err_injection 00:01:57.926 LINK verify 00:01:57.926 LINK ioat_perf 00:01:57.926 CXX test/cpp_headers/thread.o 00:01:57.926 CXX test/cpp_headers/trace.o 00:01:57.926 LINK hello_blob 00:01:57.926 LINK hotplug 00:01:57.926 LINK scheduler 00:01:57.926 LINK reset 00:01:58.225 LINK hello_bdev 00:01:58.225 LINK overhead 00:01:58.225 LINK sgl 00:01:58.225 LINK simple_copy 00:01:58.225 LINK hello_sock 00:01:58.225 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:58.225 CXX test/cpp_headers/trace_parser.o 00:01:58.225 LINK nvme_dp 00:01:58.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:58.225 LINK thread 00:01:58.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:58.225 LINK spdk_dd 00:01:58.225 LINK nvmf 00:01:58.225 LINK aer 00:01:58.225 LINK arbitration 00:01:58.225 CXX test/cpp_headers/tree.o 00:01:58.225 CXX test/cpp_headers/util.o 00:01:58.225 CXX test/cpp_headers/ublk.o 00:01:58.225 CXX test/cpp_headers/uuid.o 00:01:58.225 CXX test/cpp_headers/version.o 00:01:58.225 CXX test/cpp_headers/vfio_user_pci.o 00:01:58.225 LINK nvme_compliance 00:01:58.225 CXX test/cpp_headers/vfio_user_spec.o 00:01:58.225 LINK reconnect 00:01:58.225 LINK spdk_trace 00:01:58.225 CXX test/cpp_headers/vhost.o 00:01:58.225 CXX test/cpp_headers/vmd.o 00:01:58.225 CXX test/cpp_headers/xor.o 00:01:58.225 LINK idxd_perf 00:01:58.225 CXX test/cpp_headers/zipf.o 00:01:58.225 LINK bdevio 00:01:58.225 LINK fdp 00:01:58.225 LINK abort 00:01:58.484 LINK test_dma 00:01:58.484 LINK pci_ut 00:01:58.484 LINK dif 00:01:58.484 LINK accel_perf 00:01:58.484 LINK nvme_manage 00:01:58.484 LINK blobcli 00:01:58.484 LINK spdk_bdev 00:01:58.742 LINK nvme_fuzz 00:01:58.742 LINK spdk_nvme 00:01:58.742 LINK mem_callbacks 00:01:58.742 LINK spdk_top 00:01:58.742 LINK vhost_fuzz 00:01:58.742 LINK spdk_nvme_perf 00:01:58.742 LINK spdk_nvme_identify 00:01:59.001 LINK memory_ut 00:01:59.001 LINK bdevperf 00:01:59.001 LINK cuse 00:01:59.936 LINK iscsi_fuzz 00:02:02.465 LINK esnap 00:02:02.725 00:02:02.725 real 0m51.571s 00:02:02.725 user 7m5.546s 00:02:02.725 sys 4m17.915s 00:02:02.725 11:37:53 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:02.725 11:37:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.725 ************************************ 00:02:02.725 END TEST make 00:02:02.725 ************************************ 00:02:02.725 11:37:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:02.725 11:37:53 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:02.725 11:37:53 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:02.725 11:37:53 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:02:02.725 11:37:53 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:02.725 11:37:53 -- pm/common@45 -- $ pid=2174421 00:02:02.725 11:37:53 -- pm/common@52 -- $ sudo kill -TERM 2174421 00:02:02.725 11:37:53 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.725 11:37:53 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:02.725 11:37:53 -- pm/common@45 -- $ pid=2174424 00:02:02.725 11:37:53 -- pm/common@52 -- $ sudo kill -TERM 2174424 00:02:02.725 11:37:53 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.725 11:37:53 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:02.725 11:37:53 -- pm/common@45 -- $ pid=2174423 00:02:02.725 11:37:53 -- pm/common@52 -- $ sudo kill -TERM 2174423 00:02:02.725 11:37:53 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.725 11:37:53 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:02.725 11:37:53 -- pm/common@45 -- $ pid=2174426 00:02:02.725 11:37:53 -- pm/common@52 -- $ sudo kill -TERM 2174426 00:02:02.983 11:37:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:02.983 11:37:53 -- nvmf/common.sh@7 -- # uname -s 00:02:02.983 11:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:02.984 11:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:02.984 11:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:02.984 11:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:02.984 11:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:02.984 11:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:02.984 11:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:02.984 11:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:02.984 11:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:02.984 11:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:02.984 11:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:02.984 11:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:02.984 11:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:02.984 11:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:02.984 11:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:02.984 11:37:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:02.984 11:37:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:02.984 11:37:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:02.984 11:37:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.984 11:37:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.984 11:37:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.984 11:37:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.984 11:37:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.984 11:37:53 -- paths/export.sh@5 -- # export PATH 00:02:02.984 11:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.984 11:37:53 -- nvmf/common.sh@47 -- # : 0 00:02:02.984 11:37:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:02.984 11:37:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:02.984 11:37:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:02.984 11:37:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:02.984 11:37:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:02.984 11:37:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:02.984 11:37:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:02.984 11:37:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:02.984 11:37:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:02.984 11:37:53 -- spdk/autotest.sh@32 -- # uname -s 00:02:02.984 11:37:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:02.984 11:37:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:02.984 11:37:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:02.984 11:37:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:02.984 11:37:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:02.984 11:37:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:02.984 11:37:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:02.984 11:37:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:02.984 11:37:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:02.984 11:37:53 -- spdk/autotest.sh@48 -- # udevadm_pid=2235308 00:02:02.984 11:37:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:02.984 11:37:53 -- pm/common@17 -- # local monitor 00:02:02.984 11:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.984 11:37:53 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2235310 00:02:02.984 11:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.984 11:37:53 -- pm/common@21 -- # date +%s 00:02:02.984 11:37:53 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2235313 00:02:02.984 11:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.984 11:37:53 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2235316 00:02:02.984 11:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.984 11:37:53 -- pm/common@21 -- # date +%s 00:02:02.984 11:37:53 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=2235319 00:02:02.984 11:37:53 -- pm/common@26 -- # sleep 1 00:02:02.984 11:37:53 -- pm/common@21 -- # date +%s 00:02:02.984 11:37:53 -- pm/common@21 -- # date +%s 00:02:02.984 11:37:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713433073 00:02:02.984 11:37:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713433073 00:02:02.984 11:37:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713433073 00:02:02.984 11:37:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713433073 00:02:02.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713433073_collect-cpu-temp.pm.log 00:02:02.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713433073_collect-vmstat.pm.log 00:02:02.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713433073_collect-bmc-pm.bmc.pm.log 00:02:02.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713433073_collect-cpu-load.pm.log 00:02:03.919 11:37:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:03.919 11:37:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:03.919 11:37:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:03.919 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:02:03.919 11:37:54 -- spdk/autotest.sh@59 -- # create_test_list 00:02:03.919 11:37:54 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:03.919 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:02:03.919 11:37:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:03.919 11:37:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.175 11:37:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.175 11:37:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:04.175 11:37:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.175 11:37:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:04.175 11:37:54 -- common/autotest_common.sh@1441 -- # uname 00:02:04.175 11:37:54 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:04.175 11:37:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:04.175 11:37:54 -- common/autotest_common.sh@1461 -- # uname 00:02:04.175 11:37:54 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:04.175 11:37:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:04.175 11:37:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:04.175 11:37:54 -- spdk/autotest.sh@72 -- # hash lcov 00:02:04.175 11:37:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:02:04.175 11:37:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:04.175 --rc lcov_branch_coverage=1 00:02:04.175 --rc lcov_function_coverage=1 00:02:04.175 --rc genhtml_branch_coverage=1 00:02:04.175 --rc genhtml_function_coverage=1 00:02:04.175 --rc genhtml_legend=1 00:02:04.175 --rc geninfo_all_blocks=1 00:02:04.175 ' 00:02:04.175 11:37:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:04.175 --rc lcov_branch_coverage=1 00:02:04.175 --rc lcov_function_coverage=1 00:02:04.175 --rc genhtml_branch_coverage=1 00:02:04.176 --rc genhtml_function_coverage=1 00:02:04.176 --rc genhtml_legend=1 00:02:04.176 --rc geninfo_all_blocks=1 00:02:04.176 ' 00:02:04.176 11:37:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:04.176 --rc lcov_branch_coverage=1 00:02:04.176 --rc lcov_function_coverage=1 00:02:04.176 --rc genhtml_branch_coverage=1 00:02:04.176 --rc genhtml_function_coverage=1 00:02:04.176 --rc genhtml_legend=1 00:02:04.176 --rc geninfo_all_blocks=1 00:02:04.176 --no-external' 00:02:04.176 11:37:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:04.176 --rc lcov_branch_coverage=1 00:02:04.176 --rc lcov_function_coverage=1 00:02:04.176 --rc genhtml_branch_coverage=1 00:02:04.176 --rc genhtml_function_coverage=1 00:02:04.176 --rc genhtml_legend=1 00:02:04.176 --rc geninfo_all_blocks=1 00:02:04.176 --no-external' 00:02:04.176 11:37:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:04.176 lcov: LCOV version 1.14 00:02:04.176 11:37:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:14.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:14.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:14.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:14.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:14.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:14.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:14.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:14.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 
00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:26.346 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:26.346 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:26.347 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:26.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:26.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:26.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:26.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:26.607 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:26.607 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:26.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:26.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:27.984 11:38:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:27.984 11:38:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:27.984 11:38:18 -- common/autotest_common.sh@10 -- # set +x 00:02:27.984 11:38:18 -- spdk/autotest.sh@91 -- # rm -f 00:02:27.984 11:38:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.298 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:31.298 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:31.557 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:31.557 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:31.557 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:31.557 11:38:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:31.557 11:38:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:31.557 11:38:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:31.557 11:38:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:31.557 11:38:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:31.557 11:38:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:31.557 11:38:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:31.557 11:38:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:31.557 11:38:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:31.557 11:38:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:31.557 11:38:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:31.557 11:38:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
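The long run of geninfo "no functions found" warnings above is expected: the cpp_headers objects only compile each public header, so their .gcno files contain no executable functions and contribute empty records to the baseline. A minimal sketch of the same baseline capture outside the harness, assuming a gcc coverage build and placeholder paths (the spdk checkout and output directory below are illustrative, not taken from this job):

# Sketch only: reproduces the lcov Baseline capture step traced above.
# src and out are assumed locations, not the ones used by this run.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
src=/path/to/spdk
out=$src/../output
lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"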
00:02:31.557 11:38:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:31.557 11:38:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:31.557 11:38:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:31.557 No valid GPT data, bailing 00:02:31.557 11:38:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:31.557 11:38:22 -- scripts/common.sh@391 -- # pt= 00:02:31.557 11:38:22 -- scripts/common.sh@392 -- # return 1 00:02:31.557 11:38:22 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:31.557 1+0 records in 00:02:31.557 1+0 records out 00:02:31.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00237968 s, 441 MB/s 00:02:31.557 11:38:22 -- spdk/autotest.sh@118 -- # sync 00:02:31.557 11:38:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:31.557 11:38:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:31.557 11:38:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:38.126 11:38:28 -- spdk/autotest.sh@124 -- # uname -s 00:02:38.126 11:38:28 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:38.126 11:38:28 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.126 11:38:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:38.126 11:38:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:38.126 11:38:28 -- common/autotest_common.sh@10 -- # set +x 00:02:38.126 ************************************ 00:02:38.126 START TEST setup.sh 00:02:38.126 ************************************ 00:02:38.126 11:38:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.385 * Looking for test storage... 00:02:38.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:38.385 11:38:28 -- setup/test-setup.sh@10 -- # uname -s 00:02:38.385 11:38:28 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:38.385 11:38:28 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:38.385 11:38:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:38.385 11:38:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:38.385 11:38:28 -- common/autotest_common.sh@10 -- # set +x 00:02:38.385 ************************************ 00:02:38.385 START TEST acl 00:02:38.385 ************************************ 00:02:38.385 11:38:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:38.644 * Looking for test storage... 
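The wipe step traced above only zeroes the namespace after the GPT probe bails out and blkid reports no partition-table signature; a minimal sketch of that decision, assuming the same device name this run happened to pick (destructive, illustration only):

# Sketch of the check-then-wipe idea above: zero the first MiB of the
# namespace only when no partition-table signature is present.
dev=/dev/nvme0n1           # device picked by this particular run
pt=$(blkid -s PTTYPE -o value "$dev" || true)
if [[ -z $pt ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1
fi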
00:02:38.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:38.644 11:38:29 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:38.644 11:38:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:38.644 11:38:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:38.644 11:38:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:38.644 11:38:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:38.644 11:38:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:38.644 11:38:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:38.644 11:38:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.644 11:38:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:38.644 11:38:29 -- setup/acl.sh@12 -- # devs=() 00:02:38.644 11:38:29 -- setup/acl.sh@12 -- # declare -a devs 00:02:38.644 11:38:29 -- setup/acl.sh@13 -- # drivers=() 00:02:38.644 11:38:29 -- setup/acl.sh@13 -- # declare -A drivers 00:02:38.644 11:38:29 -- setup/acl.sh@51 -- # setup reset 00:02:38.644 11:38:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.644 11:38:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.832 11:38:32 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:42.832 11:38:32 -- setup/acl.sh@16 -- # local dev driver 00:02:42.832 11:38:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.832 11:38:32 -- setup/acl.sh@15 -- # setup output status 00:02:42.832 11:38:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.832 11:38:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:46.123 Hugepages 00:02:46.123 node hugesize free / total 00:02:46.123 11:38:35 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.123 11:38:35 -- setup/acl.sh@19 -- # continue 00:02:46.123 11:38:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:35 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.123 11:38:35 -- setup/acl.sh@19 -- # continue 00:02:46.123 11:38:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 00:02:46.123 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
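The read loop traced from here on consumes the setup.sh status table printed above (Type, BDF, Vendor, Device, NUMA, Driver, ...), keeping only controllers bound to the nvme driver; a minimal sketch of the same field-picking pattern, with the setup.sh path left as a placeholder:

# Sketch only: mirrors the "read -r _ dev _ _ _ driver _" loop in the trace,
# assuming the status columns are Type BDF Vendor Device NUMA Driver ...
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue     # skip hugepage and header lines
    [[ $driver == nvme ]] || continue     # keep NVMe controllers only
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <(/path/to/spdk/scripts/setup.sh status)   # assumed location
echo "found ${#devs[@]} NVMe controller(s): ${devs[*]}"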
00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # continue 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:46.123 11:38:36 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:46.123 11:38:36 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:46.123 11:38:36 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:46.123 11:38:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.123 11:38:36 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:46.123 11:38:36 -- setup/acl.sh@54 -- # run_test denied denied 00:02:46.123 11:38:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:46.123 11:38:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:46.123 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:02:46.123 ************************************ 00:02:46.123 START TEST denied 00:02:46.123 ************************************ 00:02:46.123 11:38:36 -- common/autotest_common.sh@1111 -- # denied 00:02:46.123 11:38:36 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:46.123 11:38:36 -- setup/acl.sh@38 -- # setup output config 00:02:46.123 11:38:36 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:46.123 11:38:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.123 11:38:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.313 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:50.313 11:38:39 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:50.313 11:38:39 -- setup/acl.sh@28 -- # local dev driver 00:02:50.313 11:38:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:50.313 11:38:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:50.313 11:38:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:50.313 11:38:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:50.313 11:38:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:50.313 11:38:39 -- setup/acl.sh@41 -- # setup reset 00:02:50.313 11:38:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.313 11:38:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.578 00:02:54.578 real 0m7.952s 00:02:54.578 user 0m2.520s 00:02:54.578 sys 0m4.769s 00:02:54.578 11:38:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:54.578 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:02:54.578 ************************************ 00:02:54.578 END TEST denied 00:02:54.578 ************************************ 00:02:54.578 11:38:44 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:54.578 11:38:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:54.578 11:38:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:54.578 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:02:54.578 ************************************ 00:02:54.578 START TEST allowed 00:02:54.578 ************************************ 00:02:54.578 11:38:44 -- common/autotest_common.sh@1111 -- # allowed 00:02:54.578 11:38:44 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:54.578 11:38:44 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:54.578 11:38:44 -- setup/acl.sh@45 -- # setup output config 00:02:54.578 11:38:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.578 11:38:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.770 
0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:58.770 11:38:49 -- setup/acl.sh@47 -- # verify 00:02:58.770 11:38:49 -- setup/acl.sh@28 -- # local dev driver 00:02:58.770 11:38:49 -- setup/acl.sh@48 -- # setup reset 00:02:58.770 11:38:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.770 11:38:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.976 00:03:02.976 real 0m8.523s 00:03:02.976 user 0m2.478s 00:03:02.976 sys 0m4.632s 00:03:02.976 11:38:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:02.976 11:38:53 -- common/autotest_common.sh@10 -- # set +x 00:03:02.976 ************************************ 00:03:02.976 END TEST allowed 00:03:02.976 ************************************ 00:03:02.976 00:03:02.976 real 0m24.211s 00:03:02.976 user 0m7.785s 00:03:02.976 sys 0m14.583s 00:03:02.976 11:38:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:02.976 11:38:53 -- common/autotest_common.sh@10 -- # set +x 00:03:02.976 ************************************ 00:03:02.976 END TEST acl 00:03:02.976 ************************************ 00:03:02.976 11:38:53 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:02.976 11:38:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.976 11:38:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.976 11:38:53 -- common/autotest_common.sh@10 -- # set +x 00:03:02.976 ************************************ 00:03:02.976 START TEST hugepages 00:03:02.976 ************************************ 00:03:02.976 11:38:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:02.976 * Looking for test storage... 
00:03:02.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.976 11:38:53 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:02.976 11:38:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:02.976 11:38:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:02.976 11:38:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:02.976 11:38:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:02.976 11:38:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:02.976 11:38:53 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:02.976 11:38:53 -- setup/common.sh@18 -- # local node= 00:03:02.976 11:38:53 -- setup/common.sh@19 -- # local var val 00:03:02.976 11:38:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.976 11:38:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.976 11:38:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.976 11:38:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.976 11:38:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.976 11:38:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38421440 kB' 'MemAvailable: 42489504 kB' 'Buffers: 2696 kB' 'Cached: 13563528 kB' 'SwapCached: 0 kB' 'Active: 10413412 kB' 'Inactive: 3660008 kB' 'Active(anon): 9846260 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510648 kB' 'Mapped: 205780 kB' 'Shmem: 9339064 kB' 'KReclaimable: 488612 kB' 'Slab: 1124416 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 635804 kB' 'KernelStack: 21984 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 11195856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 
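The loop being traced here walks every field of /proc/meminfo looking for the Hugepagesize entry; the same value can be pulled out directly with the pattern below (a sketch of the idea, not the harness's own get_meminfo helper):

# Minimal sketch: scan /proc/meminfo with the same IFS=': ' read pattern
# and report the default hugepage size in kB.
default_hugepages=0
while IFS=': ' read -r var val _; do
    if [[ $var == Hugepagesize ]]; then
        default_hugepages=$val
        break
    fi
done < /proc/meminfo
echo "Hugepagesize: ${default_hugepages} kB"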
00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.976 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.976 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # continue 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.977 11:38:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.977 11:38:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.977 11:38:53 -- setup/common.sh@33 -- # echo 2048 00:03:02.977 11:38:53 -- setup/common.sh@33 -- # return 0 00:03:02.977 11:38:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:02.977 11:38:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:02.977 11:38:53 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:02.977 11:38:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:02.977 11:38:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:02.977 11:38:53 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:02.977 11:38:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:02.977 11:38:53 -- setup/hugepages.sh@207 -- # get_nodes 00:03:02.977 11:38:53 -- setup/hugepages.sh@27 -- # local node 00:03:02.977 11:38:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.977 11:38:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:02.977 11:38:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.977 11:38:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:02.977 11:38:53 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.977 11:38:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.977 11:38:53 -- setup/hugepages.sh@208 -- # clear_hp 00:03:02.977 11:38:53 -- setup/hugepages.sh@37 -- # local node hp 00:03:02.977 11:38:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:02.977 11:38:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.977 11:38:53 -- setup/hugepages.sh@41 -- # echo 0 00:03:02.977 11:38:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.977 11:38:53 -- setup/hugepages.sh@41 -- # echo 0 00:03:02.977 11:38:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:02.977 11:38:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.977 11:38:53 -- setup/hugepages.sh@41 -- # echo 0 00:03:02.977 11:38:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.977 11:38:53 -- setup/hugepages.sh@41 -- # echo 0 00:03:02.977 11:38:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:02.977 11:38:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:02.977 11:38:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:02.977 11:38:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.977 11:38:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.977 11:38:53 -- common/autotest_common.sh@10 -- # set +x 00:03:03.239 ************************************ 00:03:03.239 START TEST default_setup 00:03:03.239 ************************************ 00:03:03.239 11:38:53 -- common/autotest_common.sh@1111 -- # default_setup 00:03:03.239 11:38:53 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:03.239 11:38:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:03.239 11:38:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:03.239 11:38:53 -- setup/hugepages.sh@51 -- # shift 00:03:03.239 11:38:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:03.239 11:38:53 -- setup/hugepages.sh@52 -- # local node_ids 00:03:03.239 11:38:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.239 11:38:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:03.239 11:38:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:03.239 11:38:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:03.239 11:38:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.239 11:38:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.239 11:38:53 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.239 11:38:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.239 11:38:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.239 11:38:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
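The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo until it reaches the Hugepagesize line (2048 kB), followed by setup/hugepages.sh recording the per-NUMA-node hugepage counts and zeroing them (clear_hp) as the default_setup test begins. A minimal standalone sketch of those two steps, assuming the usual /proc/meminfo and sysfs layout; get_meminfo_field is a hypothetical name, not the exact SPDK helper:

    #!/usr/bin/env bash
    # Sketch only: pull one field out of /proc/meminfo, e.g. Hugepagesize -> 2048 (kB).
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo_field Hugepagesize)   # typically 2048 on x86_64

    # Sketch only: zero every per-node hugepage pool, mirroring the clear_hp
    # step traced above (writing these sysfs files needs root).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done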
00:03:03.239 11:38:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.239 11:38:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:03.239 11:38:53 -- setup/hugepages.sh@73 -- # return 0 00:03:03.239 11:38:53 -- setup/hugepages.sh@137 -- # setup output 00:03:03.239 11:38:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.239 11:38:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.527 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:06.527 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:07.911 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:07.911 11:38:58 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:07.911 11:38:58 -- setup/hugepages.sh@89 -- # local node 00:03:07.911 11:38:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.911 11:38:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.911 11:38:58 -- setup/hugepages.sh@92 -- # local surp 00:03:07.911 11:38:58 -- setup/hugepages.sh@93 -- # local resv 00:03:07.911 11:38:58 -- setup/hugepages.sh@94 -- # local anon 00:03:07.911 11:38:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.911 11:38:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.911 11:38:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.911 11:38:58 -- setup/common.sh@18 -- # local node= 00:03:07.911 11:38:58 -- setup/common.sh@19 -- # local var val 00:03:07.911 11:38:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.911 11:38:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.911 11:38:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.911 11:38:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.911 11:38:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.911 11:38:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.911 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.911 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40584540 kB' 'MemAvailable: 44652604 kB' 'Buffers: 2696 kB' 'Cached: 13563656 kB' 'SwapCached: 0 kB' 'Active: 10426900 kB' 'Inactive: 3660008 kB' 'Active(anon): 9859748 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523628 kB' 'Mapped: 206068 kB' 'Shmem: 9339192 kB' 'KReclaimable: 488612 kB' 'Slab: 1123344 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634732 kB' 'KernelStack: 22144 
kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11206932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 
11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.912 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.912 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.913 11:38:58 -- setup/common.sh@33 -- # echo 0 00:03:07.913 11:38:58 -- setup/common.sh@33 -- # return 0 00:03:07.913 11:38:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:07.913 11:38:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.913 11:38:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.913 11:38:58 -- setup/common.sh@18 -- # local node= 00:03:07.913 11:38:58 -- setup/common.sh@19 -- # local var val 00:03:07.913 11:38:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.913 11:38:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.913 11:38:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.913 11:38:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.913 11:38:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.913 11:38:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40589596 kB' 'MemAvailable: 44657660 kB' 'Buffers: 2696 kB' 'Cached: 13563664 kB' 'SwapCached: 0 kB' 'Active: 10426900 kB' 'Inactive: 3660008 kB' 'Active(anon): 9859748 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523964 kB' 'Mapped: 205972 kB' 'Shmem: 9339200 kB' 'KReclaimable: 488612 kB' 'Slab: 1123332 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634720 kB' 'KernelStack: 22208 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11208460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216360 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 
kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- 
setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.913 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.913 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 
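verify_nr_hugepages repeats this get_meminfo pattern for several keys (AnonHugePages above, then HugePages_Surp, HugePages_Rsvd and HugePages_Total). With node= left empty, the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the helper falls back to /proc/meminfo; with a node id it would read the per-node file, whose lines carry a "Node <id> " prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips. A rough standalone equivalent, using the hypothetical name meminfo_value and assuming the standard sysfs layout:

    # Sketch only: read one meminfo field, globally or for a single NUMA node.
    meminfo_value() {
        local field=$1 node=${2-} file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <id> "; strip it so both
        # sources parse identically.
        sed 's/^Node [0-9]* //' "$file" | awk -v f="$field:" '$1 == f { print $2 }'
    }

    meminfo_value AnonHugePages      # e.g. 0 (kB)
    meminfo_value HugePages_Total 0  # hugepage count on node 0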
00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.914 11:38:58 -- setup/common.sh@33 -- # echo 0 00:03:07.914 11:38:58 -- setup/common.sh@33 -- # return 0 00:03:07.914 11:38:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:07.914 11:38:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.914 11:38:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.914 11:38:58 -- setup/common.sh@18 -- # local node= 00:03:07.914 11:38:58 -- setup/common.sh@19 -- # local var val 00:03:07.914 11:38:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.914 11:38:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.914 11:38:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.914 11:38:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.914 11:38:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.914 11:38:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40592140 kB' 'MemAvailable: 44660204 kB' 'Buffers: 2696 kB' 'Cached: 13563668 kB' 'SwapCached: 0 kB' 'Active: 10427204 kB' 'Inactive: 3660008 kB' 'Active(anon): 9860052 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524288 kB' 'Mapped: 205972 kB' 'Shmem: 9339204 kB' 'KReclaimable: 488612 kB' 'Slab: 1123300 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634688 kB' 'KernelStack: 22384 kB' 'PageTables: 9620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11208344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216392 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # 
[[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.914 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.914 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 
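Each of these get_meminfo calls first snapshots the whole of /proc/meminfo (the long printf '%s\n' 'MemTotal: ...' blocks in the trace) and then rescans the snapshot line by line for a single key. If only the values are needed, one pass into an associative array computes the same thing; a small sketch, assuming bash 4+ (names are illustrative):

    # Sketch only: snapshot /proc/meminfo once and index it by key.
    declare -A meminfo
    while IFS=': ' read -r key val _; do
        meminfo[$key]=$val
    done < /proc/meminfo

    echo "HugePages_Total=${meminfo[HugePages_Total]}"
    echo "HugePages_Free=${meminfo[HugePages_Free]}"
    echo "Hugepagesize=${meminfo[Hugepagesize]} kB"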
00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.915 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.915 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.915 11:38:58 -- setup/common.sh@33 -- # echo 0 00:03:07.915 11:38:58 -- setup/common.sh@33 -- # return 0 00:03:07.915 11:38:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:07.916 11:38:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.916 nr_hugepages=1024 00:03:07.916 11:38:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.916 resv_hugepages=0 00:03:07.916 11:38:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.916 surplus_hugepages=0 00:03:07.916 11:38:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.916 anon_hugepages=0 00:03:07.916 11:38:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.916 11:38:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.916 11:38:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.916 11:38:58 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:07.916 11:38:58 -- setup/common.sh@18 -- # local node= 00:03:07.916 11:38:58 -- setup/common.sh@19 -- # local var val 00:03:07.916 11:38:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.916 11:38:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.916 11:38:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.916 11:38:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.916 11:38:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.916 11:38:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.916 11:38:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40591884 kB' 'MemAvailable: 44659948 kB' 'Buffers: 2696 kB' 'Cached: 13563684 kB' 'SwapCached: 0 kB' 'Active: 10431744 kB' 'Inactive: 3660008 kB' 'Active(anon): 9864592 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528824 kB' 'Mapped: 206476 kB' 'Shmem: 9339220 kB' 'KReclaimable: 488612 kB' 'Slab: 1123236 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634624 kB' 'KernelStack: 22528 kB' 'PageTables: 10044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11213148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216360 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.916 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.916 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # 
continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 
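The repeated "[[ <key> == HugePages_Total ]] / continue" lines above are setup/common.sh's get_meminfo helper walking every "key: value" pair it just printf'd until it reaches the requested field, then echoing that field's value. Below is a minimal standalone sketch of that lookup pattern, reconstructed from the trace; the function name get_meminfo_sketch and its two positional arguments (field name, optional NUMA node) are illustrative only and not the script's own interface.

# Sketch of the meminfo lookup pattern visible in the trace above.
# Reads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a
# node number is supplied, and scans "key: value" pairs for the requested field.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups use the node's own meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#Node "$node" }           # per-node files prefix each line with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                       # value only, e.g. "1024" for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Usage matching the calls seen in this log:
#   get_meminfo_sketch HugePages_Total      # -> 1024 on this runner
#   get_meminfo_sketch HugePages_Surp 0     # -> 0 for node0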
00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.917 11:38:58 -- setup/common.sh@33 -- # echo 1024 00:03:07.917 11:38:58 -- setup/common.sh@33 -- # return 0 00:03:07.917 11:38:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.917 11:38:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.917 11:38:58 -- setup/hugepages.sh@27 -- # local node 00:03:07.917 11:38:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.917 11:38:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.917 11:38:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.917 11:38:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:07.917 11:38:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.917 11:38:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.917 11:38:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.917 11:38:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.917 11:38:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.917 11:38:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.917 11:38:58 -- setup/common.sh@18 -- # local node=0 00:03:07.917 11:38:58 -- setup/common.sh@19 -- # local var val 00:03:07.917 11:38:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.917 11:38:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.917 11:38:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.917 11:38:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.917 11:38:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.917 11:38:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18514404 kB' 'MemUsed: 14124736 kB' 'SwapCached: 0 
kB' 'Active: 7001664 kB' 'Inactive: 3286456 kB' 'Active(anon): 6675868 kB' 'Inactive(anon): 0 kB' 'Active(file): 325796 kB' 'Inactive(file): 3286456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9922296 kB' 'Mapped: 155008 kB' 'AnonPages: 369008 kB' 'Shmem: 6310044 kB' 'KernelStack: 12984 kB' 'PageTables: 6692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324156 kB' 'Slab: 640684 kB' 'SReclaimable: 324156 kB' 'SUnreclaim: 316528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.917 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.917 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 
11:38:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': 
' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # continue 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.918 11:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.918 11:38:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.918 11:38:58 -- setup/common.sh@33 -- # echo 0 00:03:07.918 11:38:58 -- setup/common.sh@33 -- # return 0 00:03:07.918 11:38:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.918 11:38:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.918 11:38:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.918 11:38:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.918 11:38:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:07.918 node0=1024 expecting 1024 00:03:07.918 11:38:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:07.918 00:03:07.918 real 0m4.787s 00:03:07.918 user 0m1.111s 00:03:07.918 sys 0m2.049s 00:03:07.918 11:38:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:07.918 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.918 ************************************ 00:03:07.918 END TEST default_setup 00:03:07.918 ************************************ 00:03:08.178 11:38:58 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:08.178 11:38:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.178 11:38:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.178 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:03:08.178 ************************************ 00:03:08.178 START TEST per_node_1G_alloc 00:03:08.178 ************************************ 00:03:08.178 11:38:58 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:08.178 11:38:58 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:08.178 11:38:58 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:08.178 11:38:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:08.178 11:38:58 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:08.178 11:38:58 -- setup/hugepages.sh@51 -- # shift 00:03:08.178 11:38:58 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:08.178 11:38:58 -- setup/hugepages.sh@52 -- # local node_ids 00:03:08.178 11:38:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.178 11:38:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:08.178 11:38:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:08.178 11:38:58 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:08.178 11:38:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.178 11:38:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:08.178 11:38:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.178 11:38:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.178 11:38:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.178 11:38:58 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:08.178 11:38:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.178 11:38:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:08.178 11:38:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.178 11:38:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:08.178 11:38:58 -- setup/hugepages.sh@73 -- # return 0 00:03:08.178 11:38:58 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:08.178 
11:38:58 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:08.178 11:38:58 -- setup/hugepages.sh@146 -- # setup output 00:03:08.178 11:38:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.178 11:38:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.475 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.475 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.475 11:39:01 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:11.475 11:39:01 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:11.475 11:39:01 -- setup/hugepages.sh@89 -- # local node 00:03:11.475 11:39:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.475 11:39:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.475 11:39:01 -- setup/hugepages.sh@92 -- # local surp 00:03:11.475 11:39:01 -- setup/hugepages.sh@93 -- # local resv 00:03:11.475 11:39:01 -- setup/hugepages.sh@94 -- # local anon 00:03:11.475 11:39:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.475 11:39:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.475 11:39:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.475 11:39:01 -- setup/common.sh@18 -- # local node= 00:03:11.475 11:39:01 -- setup/common.sh@19 -- # local var val 00:03:11.475 11:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.475 11:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.475 11:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.475 11:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.475 11:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.475 11:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40548968 kB' 'MemAvailable: 44617032 kB' 'Buffers: 2696 kB' 'Cached: 13563780 kB' 'SwapCached: 0 kB' 'Active: 10429516 kB' 'Inactive: 3660008 kB' 'Active(anon): 9862364 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525956 kB' 'Mapped: 206572 kB' 
'Shmem: 9339316 kB' 'KReclaimable: 488612 kB' 'Slab: 1123396 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634784 kB' 'KernelStack: 22208 kB' 'PageTables: 9560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11212172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216632 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.475 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.475 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- 
setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.476 11:39:01 -- setup/common.sh@33 -- # echo 0 00:03:11.476 11:39:01 -- setup/common.sh@33 -- # return 0 00:03:11.476 11:39:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:11.476 11:39:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.476 11:39:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.476 11:39:01 -- setup/common.sh@18 -- # local node= 00:03:11.476 11:39:01 -- setup/common.sh@19 -- # local var val 00:03:11.476 11:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.476 11:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.476 11:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.476 11:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.476 11:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.476 11:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40543644 kB' 'MemAvailable: 44611708 kB' 'Buffers: 2696 kB' 'Cached: 13563780 kB' 'SwapCached: 0 kB' 'Active: 10433628 kB' 'Inactive: 3660008 kB' 'Active(anon): 9866476 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530044 kB' 'Mapped: 206564 kB' 'Shmem: 9339316 kB' 'KReclaimable: 488612 kB' 'Slab: 1123392 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634780 kB' 'KernelStack: 22192 kB' 'PageTables: 9492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11216408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.476 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.476 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.477 11:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.477 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.477 11:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.477 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.477 11:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.477 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.477 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.477 
11:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... (the same continue / IFS=': ' / read -r var val _ xtrace cycle repeats for every remaining /proc/meminfo key from Unevictable through HugePages_Free; none matches HugePages_Surp) 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # read
-r var val _ 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.478 11:39:01 -- setup/common.sh@33 -- # echo 0 00:03:11.478 11:39:01 -- setup/common.sh@33 -- # return 0 00:03:11.478 11:39:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:11.478 11:39:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.478 11:39:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.478 11:39:01 -- setup/common.sh@18 -- # local node= 00:03:11.478 11:39:01 -- setup/common.sh@19 -- # local var val 00:03:11.478 11:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.478 11:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.478 11:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.478 11:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.478 11:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.478 11:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.478 11:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40541964 kB' 'MemAvailable: 44610028 kB' 'Buffers: 2696 kB' 'Cached: 13563792 kB' 'SwapCached: 0 kB' 'Active: 10431856 kB' 'Inactive: 3660008 kB' 'Active(anon): 9864704 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528688 kB' 'Mapped: 205896 kB' 'Shmem: 9339328 kB' 'KReclaimable: 488612 kB' 'Slab: 1123400 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634788 kB' 'KernelStack: 22160 kB' 'PageTables: 9448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11208608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216540 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.478 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.478 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.478 11:39:01 -- 
setup/common.sh@31 -- # read -r var val _ ... (the HugePages_Rsvd lookup walks the same keys, Buffers through AnonHugePages, with the identical continue / IFS / read cycle; none matches) 00:03:11.479
11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.479 11:39:01 -- setup/common.sh@33 -- # echo 0 00:03:11.479 11:39:01 -- setup/common.sh@33 -- # return 0 00:03:11.479 11:39:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:11.479 11:39:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.479 nr_hugepages=1024 00:03:11.479 11:39:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.479 resv_hugepages=0 00:03:11.479 11:39:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.479 surplus_hugepages=0 00:03:11.479 11:39:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.479 anon_hugepages=0 00:03:11.479 11:39:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.479 11:39:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.479 11:39:01 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:11.479 11:39:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.479 11:39:01 -- setup/common.sh@18 -- # local node= 00:03:11.479 11:39:01 -- setup/common.sh@19 -- # local var val 00:03:11.479 11:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.479 11:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.479 11:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.479 11:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.479 11:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.479 11:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40538476 kB' 'MemAvailable: 44606540 kB' 'Buffers: 2696 kB' 'Cached: 13563808 kB' 'SwapCached: 0 kB' 'Active: 10431912 kB' 'Inactive: 3660008 kB' 'Active(anon): 9864760 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528776 kB' 'Mapped: 205572 kB' 'Shmem: 9339344 kB' 'KReclaimable: 488612 kB' 'Slab: 1123388 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634776 kB' 'KernelStack: 22128 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11209668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.479 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.479 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.479 11:39:01 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... (the HugePages_Total lookup skips SwapCached through ShmemPmdMapped with the same repeated cycle before reaching the matching key) 00:03:11.480 11:39:01 -- setup/common.sh@31 --
# IFS=': ' 00:03:11.480 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.480 11:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.480 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.480 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.480 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.480 11:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.480 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.480 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.480 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.481 11:39:01 -- setup/common.sh@33 -- # echo 1024 00:03:11.481 11:39:01 -- setup/common.sh@33 -- # return 0 00:03:11.481 11:39:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.481 11:39:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.481 11:39:01 -- setup/hugepages.sh@27 -- # local node 00:03:11.481 11:39:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.481 11:39:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.481 11:39:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.481 11:39:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.481 11:39:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.481 11:39:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.481 11:39:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.481 11:39:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.481 11:39:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.481 11:39:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.481 11:39:01 -- setup/common.sh@18 -- # local node=0 00:03:11.481 11:39:01 -- setup/common.sh@19 -- # local var val 00:03:11.481 11:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.481 11:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.481 11:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.481 11:39:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.481 11:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.481 11:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32639140 kB' 'MemFree: 19505336 kB' 'MemUsed: 13133804 kB' 'SwapCached: 0 kB' 'Active: 7002128 kB' 'Inactive: 3286456 kB' 'Active(anon): 6676332 kB' 'Inactive(anon): 0 kB' 'Active(file): 325796 kB' 'Inactive(file): 3286456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9922368 kB' 'Mapped: 153820 kB' 'AnonPages: 369400 kB' 'Shmem: 6310116 kB' 'KernelStack: 12648 kB' 'PageTables: 6016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324156 kB' 'Slab: 641040 kB' 'SReclaimable: 324156 kB' 'SUnreclaim: 316884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.481 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.481 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 
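The field-by-field scan traced above and below is the harness's get_meminfo helper walking one meminfo file until it hits the requested key, either /proc/meminfo or a NUMA node's own meminfo when a node number is passed. The following is a minimal bash sketch of that pattern; the name get_meminfo_sketch and the while-read loop are illustrative, not the exact code in test/setup/common.sh (which reads the same data via mapfile):

# Look up one key in /proc/meminfo, or in a node's meminfo when a node number
# is given; per-node files prefix every line with "Node <n> ", which the sed
# strips so the same key match works in both cases.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total 0   -> 512 on this box, per the log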
00:03:11.481 11:39:01 -- setup/common.sh@31 -- # read -r var val _ ... (the node0 HugePages_Surp lookup skips Unevictable through HugePages_Total in node0's meminfo with the same repeated cycle) 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _
00:03:11.482 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.482 11:39:01 -- setup/common.sh@33 -- # echo 0 00:03:11.482 11:39:01 -- setup/common.sh@33 -- # return 0 00:03:11.482 11:39:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.482 11:39:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.482 11:39:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.482 11:39:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:11.482 11:39:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.482 11:39:01 -- setup/common.sh@18 -- # local node=1 00:03:11.482 11:39:01 -- setup/common.sh@19 -- # local var val 00:03:11.482 11:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.482 11:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.482 11:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:11.482 11:39:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:11.482 11:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.482 11:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.482 11:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 21025328 kB' 'MemUsed: 6630748 kB' 'SwapCached: 0 kB' 'Active: 3429024 kB' 'Inactive: 373552 kB' 'Active(anon): 3187668 kB' 'Inactive(anon): 0 kB' 'Active(file): 241356 kB' 'Inactive(file): 373552 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3644152 kB' 'Mapped: 51724 kB' 'AnonPages: 158680 kB' 'Shmem: 3029244 kB' 'KernelStack: 9480 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164456 kB' 'Slab: 482332 kB' 'SReclaimable: 164456 kB' 'SUnreclaim: 317876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.482 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.482 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 
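For this per-node pass (node0 above, node1 next), the harness repeats the same lookup against each node's meminfo and folds the result into its per-node expectations. A rough sketch of that loop, reusing the get_meminfo_sketch helper from the sketch above and assuming the node list comes from the /sys/devices/system/node/node* directories, as the extglob pattern in the trace suggests:

# Start from the per-node targets (512 pages each on this 2-node box) and
# fold in reserved and surplus counts the way the hugepages.sh trace shows.
shopt -s extglob
nodes_test=([0]=512 [1]=512)
resv=0
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    (( nodes_test[n] += resv ))
    (( nodes_test[n] += $(get_meminfo_sketch HugePages_Surp "$n") ))
done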
00:03:11.482 11:39:01 -- setup/common.sh@31 -- # read -r var val _ ... (the node1 HugePages_Surp lookup skips Active through AnonHugePages in node1's meminfo with the same repeated cycle) 00:03:11.483 11:39:01 --
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # continue 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.483 11:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.483 11:39:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.483 11:39:01 -- setup/common.sh@33 -- # echo 0 00:03:11.483 11:39:01 -- setup/common.sh@33 -- # return 0 00:03:11.483 11:39:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.483 11:39:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.483 11:39:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.483 11:39:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.483 11:39:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:11.483 node0=512 expecting 512 00:03:11.483 11:39:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.483 11:39:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.483 11:39:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.483 11:39:01 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:11.483 node1=512 expecting 512 00:03:11.483 11:39:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:11.483 00:03:11.483 real 0m3.346s 00:03:11.483 user 0m1.254s 00:03:11.483 sys 0m2.117s 00:03:11.483 11:39:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:11.483 11:39:01 -- common/autotest_common.sh@10 -- # set +x 00:03:11.483 ************************************ 00:03:11.483 END TEST per_node_1G_alloc 00:03:11.483 ************************************ 00:03:11.743 11:39:02 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:11.743 
11:39:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:11.743 11:39:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:11.743 11:39:02 -- common/autotest_common.sh@10 -- # set +x 00:03:11.743 ************************************ 00:03:11.743 START TEST even_2G_alloc 00:03:11.743 ************************************ 00:03:11.743 11:39:02 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:11.743 11:39:02 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:11.743 11:39:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.743 11:39:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.743 11:39:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.743 11:39:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.743 11:39:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.743 11:39:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.743 11:39:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.743 11:39:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.743 11:39:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.743 11:39:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.743 11:39:02 -- setup/hugepages.sh@83 -- # : 512 00:03:11.743 11:39:02 -- setup/hugepages.sh@84 -- # : 1 00:03:11.743 11:39:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.743 11:39:02 -- setup/hugepages.sh@83 -- # : 0 00:03:11.743 11:39:02 -- setup/hugepages.sh@84 -- # : 0 00:03:11.743 11:39:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.743 11:39:02 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:11.743 11:39:02 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:11.743 11:39:02 -- setup/hugepages.sh@153 -- # setup output 00:03:11.743 11:39:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.743 11:39:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.037 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.037 0000:80:04.0 (8086 2021): 
Already using the vfio-pci driver 00:03:15.037 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:15.037 11:39:05 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:15.037 11:39:05 -- setup/hugepages.sh@89 -- # local node 00:03:15.037 11:39:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.037 11:39:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.037 11:39:05 -- setup/hugepages.sh@92 -- # local surp 00:03:15.037 11:39:05 -- setup/hugepages.sh@93 -- # local resv 00:03:15.037 11:39:05 -- setup/hugepages.sh@94 -- # local anon 00:03:15.037 11:39:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.037 11:39:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.037 11:39:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.037 11:39:05 -- setup/common.sh@18 -- # local node= 00:03:15.037 11:39:05 -- setup/common.sh@19 -- # local var val 00:03:15.037 11:39:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.037 11:39:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.037 11:39:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.037 11:39:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.037 11:39:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.037 11:39:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40547904 kB' 'MemAvailable: 44615968 kB' 'Buffers: 2696 kB' 'Cached: 13563904 kB' 'SwapCached: 0 kB' 'Active: 10431044 kB' 'Inactive: 3660008 kB' 'Active(anon): 9863892 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527308 kB' 'Mapped: 205496 kB' 'Shmem: 9339440 kB' 'KReclaimable: 488612 kB' 'Slab: 1123164 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634552 kB' 'KernelStack: 22096 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11207076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 
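The surrounding records are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time during verify_nr_hugepages. As a reading aid, here is a minimal sketch of what that helper appears to do, reconstructed from the trace; the names follow the trace, but the real setup/common.sh may differ in details.

shopt -s extglob                         # the "Node +([0-9]) " strip below needs extglob

# Sketch of get_meminfo as suggested by the xtrace: print one meminfo field,
# either system-wide or for a single NUMA node.
get_meminfo() {
    local get=$1 node=${2:-}             # field name (e.g. HugePages_Surp), optional node
    local var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node <n> "
    while IFS=': ' read -r var val _; do # _ swallows the trailing "kB"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    echo 0                               # field not present
}

# Usage matching the trace, e.g.: get_meminfo AnonHugePages; get_meminfo HugePages_Free 0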
00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.037 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.037 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 
11:39:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 11:39:05 -- setup/common.sh@33 -- # echo 0 00:03:15.038 11:39:05 -- setup/common.sh@33 -- # 
return 0 00:03:15.038 11:39:05 -- setup/hugepages.sh@97 -- # anon=0 00:03:15.038 11:39:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.038 11:39:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.038 11:39:05 -- setup/common.sh@18 -- # local node= 00:03:15.038 11:39:05 -- setup/common.sh@19 -- # local var val 00:03:15.038 11:39:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.038 11:39:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.038 11:39:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.038 11:39:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.038 11:39:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.038 11:39:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40551712 kB' 'MemAvailable: 44619776 kB' 'Buffers: 2696 kB' 'Cached: 13563908 kB' 'SwapCached: 0 kB' 'Active: 10434224 kB' 'Inactive: 3660008 kB' 'Active(anon): 9867072 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530496 kB' 'Mapped: 205616 kB' 'Shmem: 9339444 kB' 'KReclaimable: 488612 kB' 'Slab: 1123148 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634536 kB' 'KernelStack: 22112 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11209608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216524 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.038 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 
-- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 11:39:05 -- setup/common.sh@33 -- # echo 0 00:03:15.039 11:39:05 -- setup/common.sh@33 -- # return 0 00:03:15.039 11:39:05 -- setup/hugepages.sh@99 -- # surp=0 00:03:15.039 11:39:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.039 11:39:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.039 11:39:05 -- setup/common.sh@18 -- # local node= 00:03:15.039 11:39:05 -- setup/common.sh@19 -- # local var val 00:03:15.039 11:39:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.039 11:39:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.039 11:39:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.039 11:39:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.039 11:39:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.039 11:39:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40552360 kB' 'MemAvailable: 44620424 kB' 'Buffers: 2696 kB' 'Cached: 13563908 kB' 'SwapCached: 0 kB' 'Active: 10433288 kB' 'Inactive: 3660008 kB' 'Active(anon): 9866136 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529944 kB' 'Mapped: 205572 kB' 'Shmem: 9339444 kB' 'KReclaimable: 488612 kB' 'Slab: 1123144 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634532 kB' 'KernelStack: 22128 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11209624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216524 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.039 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.039 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 
11:39:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 
11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.040 11:39:05 -- setup/common.sh@32 -- # continue 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 11:39:05 -- setup/common.sh@31 -- # read 
11:39:05 -- setup/common.sh -- scanned the remaining /proc/meminfo keys (CmaFree, Unaccepted, HugePages_Total, HugePages_Free) until HugePages_Rsvd matched -> echo 0, return 0
11:39:05 -- setup/hugepages.sh@100 -- resv=0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
11:39:05 -- setup/hugepages.sh@107 -- (( 1024 == nr_hugepages + surp + resv )); @109 (( 1024 == nr_hugepages ))
11:39:05 -- setup/hugepages.sh@110 -- get_meminfo HugePages_Total: /proc/meminfo snapshot: MemTotal: 60295216 kB, MemFree: 40551028 kB, MemAvailable: 44619092 kB, ..., AnonPages: 525428 kB, KernelStack: 22144 kB, PageTables: 9276 kB, AnonHugePages: 0 kB, HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB; scanned key by key until HugePages_Total matched -> echo 1024, return 0
11:39:05 -- setup/hugepages.sh@110 -- (( 1024 == nr_hugepages + surp + resv ))
11:39:05 -- setup/hugepages.sh@112 -- get_nodes: nodes_sys[0]=512, nodes_sys[1]=512, no_nodes=2
11:39:05 -- setup/hugepages.sh@117 -- get_meminfo HugePages_Surp 0: /sys/devices/system/node/node0/meminfo snapshot: MemTotal: 32639140 kB, MemFree: 19509312 kB, MemUsed: 13129828 kB, ..., KernelStack: 12648 kB, PageTables: 6120 kB, HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0; scanning for HugePages_Surp
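The long key-by-key trace collected here is just a lookup of one "key: value" pair in a meminfo file. A compact stand-alone equivalent, illustrative only and not SPDK's actual setup/common.sh helper, would be:

  # get_mem KEY [NODE]: print KEY's value from /proc/meminfo, or from
  # /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
  get_mem() {
      local key=$1 node=${2:-} file=/proc/meminfo var val _
      [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
      # Per-node files prefix each line with "Node <N> "; strip it so the
      # key name always lands in $var.
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* *//' "$file")
      return 1
  }

  get_mem HugePages_Total     # 1024 during the run traced above
  get_mem HugePages_Surp 0    # surplus 2 MiB pages on NUMA node 0 -> 0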
11:39:05 -- setup/common.sh -- scanned the node0 snapshot until HugePages_Surp matched -> echo 0, return 0; nodes_test[0] += 0
11:39:05 -- setup/hugepages.sh@117 -- get_meminfo HugePages_Surp 1: reading /sys/devices/system/node/node1/meminfo
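The per-node figures being fetched here are also exposed as dedicated sysfs counters, which is a quicker way to eyeball the split than parsing the node meminfo files. A small illustrative loop, assuming the default 2048 kB hugepage size used on this runner:

  # Print the 2 MiB hugepage count for every NUMA node.
  for n in /sys/devices/system/node/node[0-9]*; do
      echo "${n##*/}: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages) pages"
  done
  # Expected after even_2G_alloc on this runner: node0: 512 pages, node1: 512 pages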
11:39:05 -- setup/common.sh@16 -- node1 snapshot: MemTotal: 27656076 kB, MemFree: 21041920 kB, MemUsed: 6614156 kB, ..., KernelStack: 9464 kB, PageTables: 3088 kB, HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0; scanning for HugePages_Surp
11:39:05 -- setup/common.sh -- scanned the node1 snapshot until HugePages_Surp matched -> echo 0, return 0; nodes_test[1] += 0
node0=512 expecting 512
node1=512 expecting 512
11:39:05 -- setup/hugepages.sh@130 -- [[ 512 == \5\1\2 ]]
real 0m3.467s
user 0m1.347s
sys 0m2.173s
************************************
END TEST even_2G_alloc
************************************
11:39:05 -- setup/hugepages.sh@213 -- run_test odd_alloc odd_alloc
************************************
START TEST odd_alloc
************************************
11:39:05 -- setup/hugepages.sh@159 -- get_test_nr_hugepages 2098176 -> nr_hugepages=1025
11:39:05 -- setup/hugepages.sh@58 -- get_test_nr_hugepages_per_node: _nr_hugepages=1025, _no_nodes=2 -> nodes_test[1]=512, nodes_test[0]=513
11:39:05 -- setup/hugepages.sh@160 -- HUGEMEM=2049 HUGE_EVEN_ALLOC=yes setup output
11:39:05 -- setup/common.sh@10 -- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
0000:00:04.7 (8086 2021): Already using the vfio-pci driver
0000:00:04.6 (8086 2021): Already using the vfio-pci driver
0000:00:04.5 (8086 2021): Already using the vfio-pci driver
0000:00:04.4 (8086 2021): Already using the vfio-pci driver
0000:00:04.3 (8086 2021): Already using the vfio-pci driver
0000:00:04.2 (8086 2021): Already using the vfio-pci driver
0000:00:04.1 (8086 2021): Already using the vfio-pci driver
0000:00:04.0 (8086 2021): Already using the vfio-pci driver
0000:80:04.7 (8086 2021): Already using the vfio-pci driver
0000:80:04.6 (8086 2021): Already using the vfio-pci driver
0000:80:04.5 (8086 2021): Already using the vfio-pci driver
0000:80:04.4 (8086 2021): Already using the vfio-pci driver
0000:80:04.3 (8086 2021): Already using the vfio-pci driver
0000:80:04.2 (8086 2021): Already using the vfio-pci driver
0000:80:04.1 (8086 2021): Already using the vfio-pci driver
0000:80:04.0 (8086 2021): Already using the vfio-pci driver
0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
11:39:09 -- setup/hugepages.sh@161 -- verify_nr_hugepages
11:39:09 -- setup/hugepages.sh@96 -- [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] (THP policy is not [never], so the anonymous hugepage counter is checked next)
11:39:09 -- setup/hugepages.sh@97 -- get_meminfo AnonHugePages: no node given, reading /proc/meminfo
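A few entries above, get_test_nr_hugepages_per_node splits the 1025 requested pages as 513 on node0 and 512 on node1. The arithmetic amounts to an even split with the remainder pushed onto one node; an illustrative stand-alone sketch of that distribution, not the SPDK helper itself:

  nr_hugepages=1025
  no_nodes=2
  declare -a nodes_test
  base=$(( nr_hugepages / no_nodes ))   # 512
  rem=$(( nr_hugepages % no_nodes ))    # 1 leftover page
  for (( i = 0; i < no_nodes; i++ )); do
      # The first $rem nodes each take one extra page.
      nodes_test[i]=$(( base + (i < rem ? 1 : 0) ))
  done
  echo "${nodes_test[@]}"               # 513 512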
11:39:09 -- setup/common.sh@16 -- /proc/meminfo snapshot: MemTotal: 60295216 kB, MemFree: 40578404 kB, MemAvailable: 44646468 kB, ..., AnonPages: 523556 kB, KernelStack: 22096 kB, PageTables: 8944 kB, AnonHugePages: 0 kB, HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2099200 kB
11:39:09 -- setup/common.sh -- scanned the snapshot until AnonHugePages matched -> echo 0, return 0
11:39:09 -- setup/hugepages.sh@97 -- anon=0
11:39:09 -- setup/hugepages.sh@99 -- get_meminfo HugePages_Surp: reading /proc/meminfo again (MemFree: 40577896 kB, MemAvailable: 44645960 kB, AnonPages: 523584 kB, KernelStack: 22224 kB, HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, HugePages_Surp: 0); scanning for HugePages_Surp
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 
11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.949 11:39:09 -- setup/common.sh@33 -- # echo 0 00:03:18.949 11:39:09 -- setup/common.sh@33 -- # return 0 00:03:18.949 11:39:09 -- setup/hugepages.sh@99 -- # surp=0 00:03:18.949 11:39:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.949 11:39:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.949 11:39:09 -- setup/common.sh@18 -- # local node= 00:03:18.949 11:39:09 -- setup/common.sh@19 -- # local var val 00:03:18.949 11:39:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.949 11:39:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.949 11:39:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.949 11:39:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.949 11:39:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.949 11:39:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40576560 kB' 'MemAvailable: 44644624 kB' 'Buffers: 2696 kB' 'Cached: 13564048 kB' 'SwapCached: 0 kB' 'Active: 10427284 kB' 'Inactive: 3660008 kB' 'Active(anon): 9860132 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523336 kB' 'Mapped: 204836 kB' 'Shmem: 9339584 kB' 'KReclaimable: 488612 kB' 'Slab: 1122552 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 633940 kB' 'KernelStack: 22320 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11199796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.949 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.949 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 
11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.950 11:39:09 -- setup/common.sh@33 -- # echo 0 00:03:18.950 11:39:09 -- setup/common.sh@33 -- # return 0 00:03:18.950 11:39:09 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.950 11:39:09 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1025 00:03:18.950 nr_hugepages=1025 00:03:18.950 11:39:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.950 resv_hugepages=0 00:03:18.950 11:39:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.950 surplus_hugepages=0 00:03:18.950 11:39:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.950 anon_hugepages=0 00:03:18.951 11:39:09 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.951 11:39:09 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:18.951 11:39:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.951 11:39:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.951 11:39:09 -- setup/common.sh@18 -- # local node= 00:03:18.951 11:39:09 -- setup/common.sh@19 -- # local var val 00:03:18.951 11:39:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.951 11:39:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.951 11:39:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.951 11:39:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.951 11:39:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.951 11:39:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40576776 kB' 'MemAvailable: 44644840 kB' 'Buffers: 2696 kB' 'Cached: 13564060 kB' 'SwapCached: 0 kB' 'Active: 10426192 kB' 'Inactive: 3660008 kB' 'Active(anon): 9859040 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522760 kB' 'Mapped: 204816 kB' 'Shmem: 9339596 kB' 'KReclaimable: 488612 kB' 'Slab: 1122612 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634000 kB' 'KernelStack: 22032 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11199944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 
-- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- 
setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.951 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.951 11:39:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 
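The xtrace around this point comes from the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node id is passed), strips the "Node <id> " prefix that per-node files carry, then scans the "Key: value" pairs until it reaches the requested field and echoes its value. A minimal sketch of that flow, written against the standard /proc and sysfs layouts rather than the exact SPDK implementation:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem

        # Per-node queries read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk "Key: value [kB]" pairs until the requested key is found.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

    get_meminfo HugePages_Surp      # system-wide surplus huge pages
    get_meminfo HugePages_Surp 0    # surplus huge pages on NUMA node 0
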
00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.952 11:39:09 -- setup/common.sh@33 -- # echo 1025 00:03:18.952 11:39:09 -- setup/common.sh@33 -- # return 0 00:03:18.952 11:39:09 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.952 11:39:09 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.952 11:39:09 -- setup/hugepages.sh@27 -- # local node 00:03:18.952 11:39:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.952 11:39:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.952 11:39:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.952 11:39:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:18.952 11:39:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.952 11:39:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.952 11:39:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.952 11:39:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.952 11:39:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.952 11:39:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.952 11:39:09 -- setup/common.sh@18 -- # local node=0 00:03:18.952 11:39:09 -- setup/common.sh@19 -- # local var val 00:03:18.952 11:39:09 -- setup/common.sh@20 
-- # local mem_f mem 00:03:18.952 11:39:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.952 11:39:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.952 11:39:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.952 11:39:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.952 11:39:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19530092 kB' 'MemUsed: 13109048 kB' 'SwapCached: 0 kB' 'Active: 7003056 kB' 'Inactive: 3286456 kB' 'Active(anon): 6677260 kB' 'Inactive(anon): 0 kB' 'Active(file): 325796 kB' 'Inactive(file): 3286456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9922416 kB' 'Mapped: 153872 kB' 'AnonPages: 370272 kB' 'Shmem: 6310164 kB' 'KernelStack: 12632 kB' 'PageTables: 5832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324156 kB' 'Slab: 640696 kB' 'SReclaimable: 324156 kB' 'SUnreclaim: 316540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 
11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.952 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.952 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 
11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@33 -- # echo 0 00:03:18.953 11:39:09 -- setup/common.sh@33 -- # return 0 00:03:18.953 11:39:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.953 11:39:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.953 11:39:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.953 11:39:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.953 11:39:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.953 11:39:09 -- setup/common.sh@18 -- # local node=1 00:03:18.953 11:39:09 -- setup/common.sh@19 -- # local var val 00:03:18.953 11:39:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.953 11:39:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.953 11:39:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.953 11:39:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.953 11:39:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.953 11:39:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 21045928 kB' 'MemUsed: 6610148 kB' 'SwapCached: 0 kB' 'Active: 3423084 kB' 'Inactive: 373552 kB' 'Active(anon): 3181728 kB' 'Inactive(anon): 0 kB' 'Active(file): 241356 kB' 'Inactive(file): 373552 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3644356 kB' 'Mapped: 50944 kB' 'AnonPages: 152412 kB' 'Shmem: 3029448 kB' 'KernelStack: 9400 kB' 'PageTables: 2828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164456 kB' 'Slab: 481916 kB' 'SReclaimable: 164456 kB' 'SUnreclaim: 317460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 
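Once the system-wide totals are known, hugepages.sh walks every NUMA node (the setup/hugepages.sh@115-@117 lines traced here): the expected per-node count absorbs the reserved pages and then that node's surplus as reported by get_meminfo against node0 and node1. A hedged sketch of that loop, reusing the get_meminfo helper sketched above and the per-node targets 512 and 513 seen in this run's get_nodes output:

    # Assumes the get_meminfo function from the previous sketch is defined.
    nodes_test=( [0]=512 [1]=513 )   # expected huge pages per node in this run
    resv=0                           # reserved pages found by the global pass

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                # fold reservations into the target
        surp=$(get_meminfo HugePages_Surp "$node")    # surplus reported by node$node/meminfo
        (( nodes_test[node] += surp ))
    done
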
00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.953 11:39:09 -- setup/common.sh@32 -- # continue 00:03:18.953 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.214 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.214 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # continue 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 11:39:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 11:39:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.215 11:39:09 -- setup/common.sh@33 -- # echo 0 00:03:19.215 11:39:09 -- setup/common.sh@33 -- # return 0 00:03:19.215 11:39:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.215 11:39:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.215 11:39:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.215 11:39:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:19.215 node0=512 expecting 513 00:03:19.215 11:39:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.215 11:39:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.215 11:39:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.215 11:39:09 -- 
setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:19.215 node1=513 expecting 512 00:03:19.215 11:39:09 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:19.215 00:03:19.215 real 0m3.638s 00:03:19.215 user 0m1.406s 00:03:19.215 sys 0m2.292s 00:03:19.215 11:39:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:19.215 11:39:09 -- common/autotest_common.sh@10 -- # set +x 00:03:19.215 ************************************ 00:03:19.215 END TEST odd_alloc 00:03:19.215 ************************************ 00:03:19.215 11:39:09 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:19.215 11:39:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.215 11:39:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.215 11:39:09 -- common/autotest_common.sh@10 -- # set +x 00:03:19.215 ************************************ 00:03:19.215 START TEST custom_alloc 00:03:19.215 ************************************ 00:03:19.215 11:39:09 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:19.215 11:39:09 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:19.215 11:39:09 -- setup/hugepages.sh@169 -- # local node 00:03:19.215 11:39:09 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:19.215 11:39:09 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:19.215 11:39:09 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:19.215 11:39:09 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:19.215 11:39:09 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:19.215 11:39:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:19.215 11:39:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:19.215 11:39:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.215 11:39:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.215 11:39:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:19.215 11:39:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.215 11:39:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.215 11:39:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.215 11:39:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:19.215 11:39:09 -- setup/hugepages.sh@83 -- # : 256 00:03:19.215 11:39:09 -- setup/hugepages.sh@84 -- # : 1 00:03:19.215 11:39:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:19.215 11:39:09 -- setup/hugepages.sh@83 -- # : 0 00:03:19.215 11:39:09 -- setup/hugepages.sh@84 -- # : 0 00:03:19.215 11:39:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:19.215 11:39:09 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:19.215 11:39:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.215 11:39:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.215 11:39:09 
-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:19.215 11:39:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.215 11:39:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.215 11:39:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.215 11:39:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.215 11:39:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.215 11:39:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.215 11:39:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.215 11:39:09 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:19.215 11:39:09 -- setup/hugepages.sh@78 -- # return 0 00:03:19.215 11:39:09 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:19.215 11:39:09 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:19.215 11:39:09 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:19.215 11:39:09 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:19.215 11:39:09 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:19.215 11:39:09 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:19.215 11:39:09 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:19.215 11:39:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.215 11:39:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.215 11:39:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.216 11:39:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.216 11:39:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.216 11:39:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.216 11:39:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.216 11:39:09 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:19.216 11:39:09 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.216 11:39:09 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:19.216 11:39:09 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.216 11:39:09 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:19.216 11:39:09 -- setup/hugepages.sh@78 -- # return 0 00:03:19.216 11:39:09 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:19.216 11:39:09 -- setup/hugepages.sh@187 -- # setup output 00:03:19.216 11:39:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.216 11:39:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.514 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.6 (8086 2021): Already using the vfio-pci 
driver 00:03:22.514 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.514 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.514 11:39:12 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:22.514 11:39:12 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:22.514 11:39:12 -- setup/hugepages.sh@89 -- # local node 00:03:22.514 11:39:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.514 11:39:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.514 11:39:12 -- setup/hugepages.sh@92 -- # local surp 00:03:22.514 11:39:12 -- setup/hugepages.sh@93 -- # local resv 00:03:22.514 11:39:12 -- setup/hugepages.sh@94 -- # local anon 00:03:22.514 11:39:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.514 11:39:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.514 11:39:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.514 11:39:12 -- setup/common.sh@18 -- # local node= 00:03:22.514 11:39:12 -- setup/common.sh@19 -- # local var val 00:03:22.514 11:39:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.514 11:39:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.514 11:39:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.514 11:39:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.514 11:39:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.514 11:39:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39538544 kB' 'MemAvailable: 43606608 kB' 'Buffers: 2696 kB' 'Cached: 13564164 kB' 'SwapCached: 0 kB' 'Active: 10428484 kB' 'Inactive: 3660008 kB' 'Active(anon): 9861332 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524444 kB' 'Mapped: 204928 kB' 'Shmem: 9339700 kB' 'KReclaimable: 488612 kB' 'Slab: 1122992 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634380 kB' 'KernelStack: 22080 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11200916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 
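For reference, the totals this verify pass keeps reporting come from the custom_alloc setup earlier in the trace: get_test_nr_hugepages converts the requested sizes into page counts, and dividing by the 2048 kB Hugepagesize shown in the meminfo dumps (the divisor is an assumption based on that reported size) reproduces the numbers exactly, which is where HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and the HugePages_Total: 1536 in every snapshot come from:

echo $(( 1048576 / 2048 ))   # 512  pages for nodes_hp[0]
echo $(( 2097152 / 2048 ))   # 1024 pages for nodes_hp[1]
echo $(( 512 + 1024 ))       # 1536 = HugePages_Total reported by /proc/meminfo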
00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.514 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.514 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 
11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.515 11:39:12 -- setup/common.sh@33 -- # echo 0 00:03:22.515 11:39:12 -- setup/common.sh@33 -- # return 0 00:03:22.515 11:39:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.515 11:39:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.515 11:39:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.515 11:39:12 -- setup/common.sh@18 -- # local node= 00:03:22.515 11:39:12 -- setup/common.sh@19 -- # local var val 00:03:22.515 11:39:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.515 11:39:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.515 11:39:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.515 11:39:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.515 11:39:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.515 11:39:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39538588 kB' 'MemAvailable: 43606652 kB' 'Buffers: 2696 kB' 'Cached: 13564168 kB' 'SwapCached: 0 kB' 'Active: 10427764 kB' 'Inactive: 3660008 kB' 'Active(anon): 9860612 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524156 kB' 'Mapped: 204844 kB' 'Shmem: 9339704 kB' 'KReclaimable: 488612 kB' 'Slab: 1122960 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634348 kB' 'KernelStack: 22064 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11200928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 
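The AnonHugePages pass just above came back 0, and the HugePages_Surp pass that starts here will as well. The anon count is only gathered because the guard at hugepages.sh@96 saw the transparent-hugepage mode string "always [madvise] never", i.e. [never] is not the selected mode; that string has the format of /sys/kernel/mm/transparent_hugepage/enabled, which is an assumption here since the trace only shows the already-expanded value. A rough rendition of that gate:

# sketch of the hugepages.sh@96 check; the sysfs path is assumed, not shown in the trace
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
else
    anon=0
fi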
00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.515 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.515 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.516 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.516 11:39:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.517 11:39:12 -- setup/common.sh@33 -- # echo 0 00:03:22.517 11:39:12 -- setup/common.sh@33 -- # return 0 00:03:22.517 11:39:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.517 11:39:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.517 11:39:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.517 11:39:12 -- setup/common.sh@18 -- # local node= 00:03:22.517 11:39:12 -- setup/common.sh@19 -- # local var val 00:03:22.517 11:39:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.517 11:39:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.517 
11:39:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.517 11:39:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.517 11:39:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.517 11:39:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39538960 kB' 'MemAvailable: 43607024 kB' 'Buffers: 2696 kB' 'Cached: 13564180 kB' 'SwapCached: 0 kB' 'Active: 10427756 kB' 'Inactive: 3660008 kB' 'Active(anon): 9860604 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524156 kB' 'Mapped: 204844 kB' 'Shmem: 9339716 kB' 'KReclaimable: 488612 kB' 'Slab: 1122960 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634348 kB' 'KernelStack: 22064 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11200944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- 
setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.517 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.517 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 
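The scan in progress here is the last of verify_nr_hugepages' three meminfo passes (AnonHugePages, HugePages_Surp, HugePages_Rsvd); each comes back 0, after which the trace echoes nr_hugepages=1536 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 and checks that the kernel's total matches the pool custom_alloc asked for. A rough, self-contained rendition of that bookkeeping (not the literal hugepages.sh code; the awk lookup stands in for the script's own get_meminfo call):

nr_hugepages=1536      # nodes_hp[0]=512 + nodes_hp[1]=1024 requested by custom_alloc
anon=0 surp=0 resv=0   # the three passes above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1536 in these dumps
(( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages"
(( total == nr_hugepages )) || echo "kernel total differs from the requested pool"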
00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.518 11:39:12 -- setup/common.sh@33 -- # echo 0 00:03:22.518 11:39:12 -- setup/common.sh@33 -- # return 0 00:03:22.518 11:39:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.518 11:39:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:22.518 nr_hugepages=1536 00:03:22.518 11:39:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.518 resv_hugepages=0 00:03:22.518 11:39:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.518 surplus_hugepages=0 00:03:22.518 11:39:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.518 anon_hugepages=0 00:03:22.518 11:39:12 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:22.518 11:39:12 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:22.518 11:39:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.518 11:39:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.518 11:39:12 -- setup/common.sh@18 -- # local node= 00:03:22.518 11:39:12 -- setup/common.sh@19 -- # local var val 00:03:22.518 11:39:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.518 11:39:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.518 11:39:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.518 11:39:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.518 11:39:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.518 11:39:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39538456 kB' 'MemAvailable: 43606520 kB' 'Buffers: 2696 kB' 'Cached: 13564180 kB' 'SwapCached: 0 kB' 'Active: 
10427792 kB' 'Inactive: 3660008 kB' 'Active(anon): 9860640 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524188 kB' 'Mapped: 204844 kB' 'Shmem: 9339716 kB' 'KReclaimable: 488612 kB' 'Slab: 1122960 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634348 kB' 'KernelStack: 22080 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11200956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.518 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.518 11:39:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 
11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.519 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.519 11:39:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 
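The scan running above and below this point is the same loop hunting for HugePages_Total; its result feeds the whole-pool assertion that custom_alloc makes before looking at individual nodes. Pulled out of the traced hugepages.sh lines, and using the get_meminfo sketch above, the check amounts to roughly this (nr_hugepages and surp were computed earlier in the test, outside this excerpt, so their values here are the ones this run reported):

  # Whole-pool consistency check for custom_alloc (hugepages.sh@100-@110 in the trace).
  nr_hugepages=1536 surp=0               # reported earlier in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 on this host, per the scan further up
  echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp"
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 1536 == 1536 + 0 + 0
  # Only when this holds does the test move on to the per-node split checks below.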
00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.520 11:39:12 -- setup/common.sh@33 -- # echo 1536 00:03:22.520 11:39:12 -- setup/common.sh@33 -- # return 0 00:03:22.520 11:39:12 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:22.520 11:39:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.520 11:39:12 -- setup/hugepages.sh@27 -- # local node 00:03:22.520 11:39:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.520 11:39:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.520 11:39:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.520 11:39:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.520 11:39:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.520 11:39:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.520 11:39:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.520 11:39:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.520 11:39:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.520 11:39:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.520 11:39:12 -- setup/common.sh@18 -- # local node=0 00:03:22.520 11:39:12 -- setup/common.sh@19 -- # local var val 00:03:22.520 11:39:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.520 11:39:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.520 11:39:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.520 11:39:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.520 11:39:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.520 11:39:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19532140 kB' 'MemUsed: 13107000 kB' 'SwapCached: 0 kB' 'Active: 7001980 kB' 'Inactive: 3286456 kB' 'Active(anon): 6676184 kB' 'Inactive(anon): 0 kB' 'Active(file): 325796 kB' 'Inactive(file): 3286456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9922436 kB' 'Mapped: 153900 kB' 'AnonPages: 369104 kB' 'Shmem: 6310184 kB' 'KernelStack: 12632 kB' 'PageTables: 5932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324156 kB' 'Slab: 640768 kB' 'SReclaimable: 324156 kB' 'SUnreclaim: 316612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.520 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.520 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 
-- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.521 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.521 11:39:12 -- setup/common.sh@33 -- # echo 0 00:03:22.521 11:39:12 -- setup/common.sh@33 -- # return 0 00:03:22.521 11:39:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.521 11:39:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.521 11:39:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.521 11:39:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.521 11:39:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.521 11:39:12 -- setup/common.sh@18 -- # local node=1 
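Node 0's surplus query just came back as 0, and get_meminfo is being called again with node=1, so this pass reads /sys/devices/system/node/node1/meminfo instead of /proc/meminfo. The loop that drives these two queries, and the 'nodeN=... expecting ...' lines it prints at the end, are traced as hugepages.sh@27-33 and @115-130; condensed, and with the parts the log does not show marked as assumptions, it looks like this:

  # Condensed per-node check for custom_alloc; get_meminfo is the helper sketched earlier.
  shopt -s extglob
  nodes_test=([0]=512 [1]=1024)          # split this custom_alloc run configured beforehand
  nodes_sys=()
  resv=0                                 # HugePages_Rsvd, from the scan further up
  for node in /sys/devices/system/node/node+([0-9]); do
      # The trace only shows the already-expanded values 512 and 1024; reading the per-node
      # nr_hugepages from sysfs is an assumption about where they come from.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}              # 2 on this box
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 for both nodes here
  done
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  # The test then joins the distinct counts and asserts "512,1024 == 512,1024" (hugepages.sh@130).

With both surpluses at 0 the expected counts stay at 512 and 1024, which is exactly what the "expecting" lines further down report before custom_alloc passes.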
00:03:22.521 11:39:12 -- setup/common.sh@19 -- # local var val 00:03:22.521 11:39:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.521 11:39:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.521 11:39:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.521 11:39:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.521 11:39:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.521 11:39:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.521 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 20010688 kB' 'MemUsed: 7645388 kB' 'SwapCached: 0 kB' 'Active: 3426060 kB' 'Inactive: 373552 kB' 'Active(anon): 3184704 kB' 'Inactive(anon): 0 kB' 'Active(file): 241356 kB' 'Inactive(file): 373552 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3644480 kB' 'Mapped: 50944 kB' 'AnonPages: 155324 kB' 'Shmem: 3029572 kB' 'KernelStack: 9464 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164456 kB' 'Slab: 482192 kB' 'SReclaimable: 164456 kB' 'SUnreclaim: 317736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- 
# continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.522 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.522 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.523 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.523 11:39:12 -- setup/common.sh@32 -- # continue 00:03:22.523 11:39:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.523 11:39:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.523 11:39:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.523 11:39:12 -- setup/common.sh@33 -- # echo 0 00:03:22.523 11:39:12 -- setup/common.sh@33 -- # return 0 00:03:22.523 11:39:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.523 11:39:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.523 11:39:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.523 11:39:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.523 11:39:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.523 node0=512 expecting 512 00:03:22.523 11:39:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.523 11:39:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.523 11:39:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.523 11:39:12 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:22.523 node1=1024 expecting 1024 00:03:22.523 11:39:12 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:22.523 00:03:22.523 real 0m3.091s 00:03:22.523 user 0m1.065s 00:03:22.523 sys 0m1.946s 00:03:22.523 11:39:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.523 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:03:22.523 ************************************ 00:03:22.523 END TEST custom_alloc 00:03:22.523 ************************************ 00:03:22.523 11:39:12 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:22.523 11:39:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.523 11:39:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.523 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:03:22.523 ************************************ 00:03:22.523 START TEST no_shrink_alloc 00:03:22.523 ************************************ 00:03:22.523 11:39:13 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:22.523 11:39:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:22.523 11:39:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.523 11:39:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:22.523 11:39:13 -- setup/hugepages.sh@51 -- # shift 00:03:22.523 11:39:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:22.523 11:39:13 -- setup/hugepages.sh@52 -- # local node_ids 00:03:22.523 11:39:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.523 11:39:13 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.523 11:39:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:22.523 11:39:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:22.523 11:39:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.523 11:39:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.523 11:39:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.523 11:39:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.523 11:39:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.523 11:39:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:22.523 11:39:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.523 11:39:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:22.523 11:39:13 -- setup/hugepages.sh@73 -- # return 0 00:03:22.523 11:39:13 -- setup/hugepages.sh@198 -- # setup output 00:03:22.523 11:39:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.523 11:39:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.838 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.838 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.838 11:39:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:25.838 11:39:16 -- setup/hugepages.sh@89 -- # local node 00:03:25.838 11:39:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.838 11:39:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.838 11:39:16 -- setup/hugepages.sh@92 -- # local surp 00:03:25.838 11:39:16 -- setup/hugepages.sh@93 -- # local resv 00:03:25.838 11:39:16 -- setup/hugepages.sh@94 -- # local anon 00:03:25.838 11:39:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.838 11:39:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.838 11:39:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.838 11:39:16 -- setup/common.sh@18 -- # local node= 00:03:25.838 11:39:16 -- setup/common.sh@19 -- # local var val 00:03:25.838 11:39:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.838 11:39:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.838 11:39:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.838 11:39:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.838 11:39:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.838 11:39:16 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.838 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.838 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40604412 kB' 'MemAvailable: 44672476 kB' 'Buffers: 2696 kB' 'Cached: 13564292 kB' 'SwapCached: 0 kB' 'Active: 10428712 kB' 'Inactive: 3660008 kB' 'Active(anon): 9861560 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524588 kB' 'Mapped: 204984 kB' 'Shmem: 9339828 kB' 'KReclaimable: 488612 kB' 'Slab: 1122996 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634384 kB' 'KernelStack: 22064 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11201492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
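custom_alloc passed above and the trace has moved into the next test, no_shrink_alloc: it asked get_test_nr_hugepages for 2097152 (kB, i.e. 1024 pages at the 2048 kB page size) placed entirely on node 0, re-ran scripts/setup.sh (the vfio-pci lines), and is now inside verify_nr_hugepages re-reading meminfo; the scan in progress here is its AnonHugePages query. The opening of that verification, condensed from the traced hugepages.sh@89-99 lines (the transparent-hugepage sysfs path is an assumption, since the trace only shows the already-expanded "always [madvise] never" string):

  # Opening of verify_nr_hugepages as traced for no_shrink_alloc; get_meminfo as sketched earlier.
  verify_nr_hugepages() {
      local anon=0 surp resv
      # Count transparent-hugepage usage unless THP is disabled outright (@96-97);
      # this host reports "always [madvise] never", so the query runs and returns 0.
      [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] &&
          anon=$(get_meminfo AnonHugePages)
      surp=$(get_meminfo HugePages_Surp)   # the query that begins at the end of this excerpt
      resv=$(get_meminfo HugePages_Rsvd)
      # ...then the same nr_hugepages and per-node accounting already traced for custom_alloc
      # above; those steps fall outside this excerpt.
  }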
00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.839 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.839 11:39:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.840 11:39:16 -- setup/common.sh@33 -- # echo 0 00:03:25.840 11:39:16 -- setup/common.sh@33 -- # return 0 00:03:25.840 11:39:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.840 11:39:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.840 11:39:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.840 11:39:16 -- setup/common.sh@18 -- # local node= 00:03:25.840 11:39:16 -- setup/common.sh@19 -- # local var val 00:03:25.840 11:39:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.840 11:39:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.840 11:39:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.840 11:39:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.840 11:39:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.840 11:39:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40608144 kB' 'MemAvailable: 44676208 kB' 'Buffers: 2696 kB' 'Cached: 13564308 kB' 'SwapCached: 0 kB' 'Active: 10428288 kB' 'Inactive: 3660008 kB' 'Active(anon): 9861136 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524604 kB' 'Mapped: 204884 kB' 'Shmem: 9339844 kB' 'KReclaimable: 488612 kB' 'Slab: 1122948 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634336 kB' 'KernelStack: 22064 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11202004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- 
setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.840 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.840 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.841 11:39:16 -- setup/common.sh@33 -- # echo 0 00:03:25.841 11:39:16 -- setup/common.sh@33 -- # return 0 00:03:25.841 11:39:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.841 11:39:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.841 11:39:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.841 11:39:16 -- setup/common.sh@18 -- # local node= 00:03:25.841 11:39:16 -- setup/common.sh@19 -- # local var val 00:03:25.841 11:39:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.841 11:39:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.841 11:39:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.841 11:39:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.841 11:39:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.841 11:39:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40607648 kB' 'MemAvailable: 44675712 kB' 'Buffers: 2696 kB' 'Cached: 13564320 kB' 'SwapCached: 0 kB' 'Active: 10428312 kB' 'Inactive: 3660008 kB' 'Active(anon): 9861160 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524620 kB' 'Mapped: 204884 kB' 'Shmem: 9339856 kB' 'KReclaimable: 488612 kB' 'Slab: 1122948 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634336 kB' 'KernelStack: 22064 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11202020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:25.841 11:39:16 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.841 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.841 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- 
setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 
11:39:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # continue 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.842 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.842 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.842 11:39:16 -- setup/common.sh@33 -- # echo 0 00:03:25.842 
11:39:16 -- setup/common.sh@33 -- # return 0 00:03:26.104 11:39:16 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.105 11:39:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.105 nr_hugepages=1024 00:03:26.105 11:39:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.105 resv_hugepages=0 00:03:26.105 11:39:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.105 surplus_hugepages=0 00:03:26.105 11:39:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.105 anon_hugepages=0 00:03:26.105 11:39:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.105 11:39:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.105 11:39:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.105 11:39:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.105 11:39:16 -- setup/common.sh@18 -- # local node= 00:03:26.105 11:39:16 -- setup/common.sh@19 -- # local var val 00:03:26.105 11:39:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.105 11:39:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.105 11:39:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.105 11:39:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.105 11:39:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.105 11:39:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40607816 kB' 'MemAvailable: 44675880 kB' 'Buffers: 2696 kB' 'Cached: 13564344 kB' 'SwapCached: 0 kB' 'Active: 10427916 kB' 'Inactive: 3660008 kB' 'Active(anon): 9860764 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524172 kB' 'Mapped: 204884 kB' 'Shmem: 9339880 kB' 'KReclaimable: 488612 kB' 'Slab: 1122948 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634336 kB' 'KernelStack: 22032 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11202036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
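At this point setup/hugepages.sh has collected anon=0, surp=0 and resv=0 from the lookups above and echoes the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) before cross-checking the totals. A compact sketch of the accounting implied by hugepages.sh@97-@110, using the variable names from the trace (the @107/@109 comparisons show an already-expanded 1024 on the left, so the exact expression that produced it is not visible in this excerpt, and how a failed check is reported is likewise assumed):

    # sketch: verify the kernel exposes exactly the requested huge pages
    anon=$(get_meminfo AnonHugePages)     # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)    # 0 surplus pages
    resv=$(get_meminfo HugePages_Rsvd)    # 0 reserved pages
    nr_hugepages=1024                     # value requested for this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

The HugePages_Total lookup that feeds the final comparison is the long scan that follows.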
00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 
00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.105 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.105 11:39:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 
11:39:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.106 11:39:16 -- setup/common.sh@33 -- # echo 1024 00:03:26.106 11:39:16 -- setup/common.sh@33 -- # return 0 00:03:26.106 11:39:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.106 11:39:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.106 11:39:16 -- setup/hugepages.sh@27 -- # local node 00:03:26.106 11:39:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.106 11:39:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.106 11:39:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.106 11:39:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.106 11:39:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.106 11:39:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.106 11:39:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.106 11:39:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.106 11:39:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.106 11:39:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.106 11:39:16 
-- setup/common.sh@18 -- # local node=0 00:03:26.106 11:39:16 -- setup/common.sh@19 -- # local var val 00:03:26.106 11:39:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.106 11:39:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.106 11:39:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.106 11:39:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.106 11:39:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.106 11:39:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18491188 kB' 'MemUsed: 14147952 kB' 'SwapCached: 0 kB' 'Active: 7004060 kB' 'Inactive: 3286456 kB' 'Active(anon): 6678264 kB' 'Inactive(anon): 0 kB' 'Active(file): 325796 kB' 'Inactive(file): 3286456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9922536 kB' 'Mapped: 153940 kB' 'AnonPages: 371184 kB' 'Shmem: 6310284 kB' 'KernelStack: 12664 kB' 'PageTables: 6024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324156 kB' 'Slab: 640472 kB' 'SReclaimable: 324156 kB' 'SUnreclaim: 316316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.106 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.106 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 
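Note on the loop traced above: this is setup/common.sh's get_meminfo scanning /sys/devices/system/node/node0/meminfo key by key for HugePages_Surp. A minimal sketch of that parsing logic, reconstructed only from the commands visible in the trace (the mem_f selection, the "Node N " prefix strip, and the IFS=': ' read/continue loop); anything not shown in the trace is an assumption rather than the script's exact code:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo helper seen in the trace (setup/common.sh).
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1       # meminfo key to look up, e.g. HugePages_Surp
        local node=${2:-}  # optional NUMA node number
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # When a node is given and a per-node meminfo exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value kB" lines and print the value for the requested key.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Example: surplus hugepages currently reported for node 0.
    get_meminfo HugePages_Surp 0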
00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # continue 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.107 11:39:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.107 11:39:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.107 11:39:16 -- setup/common.sh@33 -- # echo 0 00:03:26.107 11:39:16 -- setup/common.sh@33 -- # return 0 00:03:26.107 11:39:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.107 11:39:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.107 11:39:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.107 11:39:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.107 11:39:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.107 node0=1024 expecting 1024 00:03:26.107 11:39:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.107 11:39:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:26.107 11:39:16 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:26.107 11:39:16 -- setup/hugepages.sh@202 -- # setup output 00:03:26.107 11:39:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.107 11:39:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.405 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.405 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.405 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:29.405 11:39:19 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:29.405 11:39:19 -- setup/hugepages.sh@89 -- # local node 00:03:29.405 11:39:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.405 11:39:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.405 11:39:19 -- setup/hugepages.sh@92 -- # local surp 00:03:29.405 11:39:19 -- setup/hugepages.sh@93 -- # local resv 00:03:29.405 11:39:19 -- setup/hugepages.sh@94 -- # local anon 00:03:29.405 11:39:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.405 11:39:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.405 11:39:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.405 11:39:19 -- setup/common.sh@18 -- # local node= 00:03:29.405 11:39:19 -- setup/common.sh@19 -- # local var val 00:03:29.405 11:39:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.405 11:39:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.405 11:39:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.405 11:39:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.405 11:39:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.405 11:39:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.405 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.405 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.405 11:39:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40609444 kB' 'MemAvailable: 44677508 kB' 'Buffers: 2696 kB' 'Cached: 13564400 kB' 'SwapCached: 0 kB' 'Active: 10429512 kB' 'Inactive: 3660008 kB' 'Active(anon): 9862360 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525276 kB' 'Mapped: 204976 kB' 'Shmem: 9339936 kB' 'KReclaimable: 488612 kB' 'Slab: 1122808 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634196 kB' 'KernelStack: 22080 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11201980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:29.405 11:39:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.405 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.405 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.405 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.405 11:39:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.406 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.406 11:39:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.406 11:39:19 -- setup/common.sh@33 -- # echo 0 00:03:29.406 11:39:19 -- setup/common.sh@33 -- # return 0 00:03:29.406 11:39:19 -- setup/hugepages.sh@97 -- # anon=0 00:03:29.406 11:39:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.406 
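Note on hugepages.sh@96-97 above: verify_nr_hugepages only counts AnonHugePages when transparent hugepages are not globally disabled ("[never]"), and in this run it records anon=0. A self-contained sketch of that step; the real script goes through its get_meminfo helper, and awk stands in for it here only so the example runs on its own:

    #!/usr/bin/env bash
    # Sketch of the anonymous-hugepage accounting step traced above.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

    # Count AnonHugePages only when THP is not globally disabled ("[never]").
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # in kB; 0 in this run
    fi
    echo "anon_hugepages=$anon"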
11:39:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.406 11:39:19 -- setup/common.sh@18 -- # local node= 00:03:29.406 11:39:19 -- setup/common.sh@19 -- # local var val 00:03:29.406 11:39:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.406 11:39:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.406 11:39:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.407 11:39:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.407 11:39:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.407 11:39:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40610416 kB' 'MemAvailable: 44678480 kB' 'Buffers: 2696 kB' 'Cached: 13564404 kB' 'SwapCached: 0 kB' 'Active: 10429260 kB' 'Inactive: 3660008 kB' 'Active(anon): 9862108 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525060 kB' 'Mapped: 204964 kB' 'Shmem: 9339940 kB' 'KReclaimable: 488612 kB' 'Slab: 1122792 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634180 kB' 'KernelStack: 22048 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11201992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # 
continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.407 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.407 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.408 11:39:19 -- setup/common.sh@33 -- # echo 0 00:03:29.408 11:39:19 -- setup/common.sh@33 -- # return 0 00:03:29.408 11:39:19 -- setup/hugepages.sh@99 -- # surp=0 00:03:29.408 11:39:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.408 11:39:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.408 11:39:19 -- setup/common.sh@18 -- # local node= 00:03:29.408 11:39:19 -- setup/common.sh@19 -- # local var val 00:03:29.408 11:39:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.408 11:39:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.408 11:39:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.408 11:39:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.408 11:39:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.408 11:39:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40610776 kB' 'MemAvailable: 44678840 kB' 'Buffers: 2696 kB' 'Cached: 13564404 kB' 'SwapCached: 0 kB' 
'Active: 10428668 kB' 'Inactive: 3660008 kB' 'Active(anon): 9861516 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524372 kB' 'Mapped: 204888 kB' 'Shmem: 9339940 kB' 'KReclaimable: 488612 kB' 'Slab: 1122760 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634148 kB' 'KernelStack: 22016 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11202136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.408 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.408 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 
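Note on the surrounding trace: verify_nr_hugepages is reading HugePages_Surp, HugePages_Rsvd and HugePages_Total and then checking them against the requested pool size (1024 pages here, with 0 surplus and 0 reserved). A loose, self-contained sketch of that consistency check, with awk again standing in for the script's get_meminfo helper; the exact arithmetic in hugepages.sh may differ slightly:

    #!/usr/bin/env bash
    # Sketch of the hugepage-pool consistency check performed in this trace.
    expected=1024   # NRHUGE requested for this run
    nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

    # The pool is consistent when the requested count covers the reported total
    # plus surplus/reserved bookkeeping, and the total matches what was asked for.
    (( expected == nr_hugepages + surp + resv )) || exit 1
    (( expected == nr_hugepages )) || exit 1
    echo "hugepage pool verified: $nr_hugepages pages"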
00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.409 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.409 11:39:19 -- setup/common.sh@33 -- # echo 0 00:03:29.409 11:39:19 -- setup/common.sh@33 -- # return 0 00:03:29.409 11:39:19 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.409 11:39:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.409 nr_hugepages=1024 00:03:29.409 11:39:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.409 resv_hugepages=0 00:03:29.409 11:39:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.409 surplus_hugepages=0 00:03:29.409 11:39:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.409 anon_hugepages=0 00:03:29.409 11:39:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.409 11:39:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.409 11:39:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.409 11:39:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.409 11:39:19 -- setup/common.sh@18 -- # local node= 00:03:29.409 11:39:19 -- setup/common.sh@19 -- # local var val 00:03:29.409 11:39:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.409 11:39:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.409 11:39:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.409 11:39:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.409 11:39:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.409 11:39:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.409 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40610524 kB' 'MemAvailable: 44678588 kB' 'Buffers: 2696 kB' 'Cached: 13564440 kB' 'SwapCached: 0 kB' 'Active: 10428348 kB' 'Inactive: 3660008 kB' 'Active(anon): 9861196 kB' 'Inactive(anon): 0 kB' 'Active(file): 567152 kB' 'Inactive(file): 3660008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524508 kB' 'Mapped: 204888 kB' 'Shmem: 9339976 kB' 'KReclaimable: 488612 kB' 'Slab: 1122760 kB' 'SReclaimable: 488612 kB' 'SUnreclaim: 634148 kB' 'KernelStack: 22016 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11202156 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3208564 kB' 'DirectMap2M: 14303232 kB' 'DirectMap1G: 51380224 kB' 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- 
setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.410 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.410 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.411 11:39:19 -- 
setup/common.sh@33 -- # echo 1024 00:03:29.411 11:39:19 -- setup/common.sh@33 -- # return 0 00:03:29.411 11:39:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.411 11:39:19 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.411 11:39:19 -- setup/hugepages.sh@27 -- # local node 00:03:29.411 11:39:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.411 11:39:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.411 11:39:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.411 11:39:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.411 11:39:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.411 11:39:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.411 11:39:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.411 11:39:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.411 11:39:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.411 11:39:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.411 11:39:19 -- setup/common.sh@18 -- # local node=0 00:03:29.411 11:39:19 -- setup/common.sh@19 -- # local var val 00:03:29.411 11:39:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.411 11:39:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.411 11:39:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.411 11:39:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.411 11:39:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.411 11:39:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.411 11:39:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18498500 kB' 'MemUsed: 14140640 kB' 'SwapCached: 0 kB' 'Active: 7002440 kB' 'Inactive: 3286456 kB' 'Active(anon): 6676644 kB' 'Inactive(anon): 0 kB' 'Active(file): 325796 kB' 'Inactive(file): 3286456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9922604 kB' 'Mapped: 153944 kB' 'AnonPages: 369412 kB' 'Shmem: 6310352 kB' 'KernelStack: 12648 kB' 'PageTables: 5984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324156 kB' 'Slab: 640232 kB' 'SReclaimable: 324156 kB' 'SUnreclaim: 316076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.411 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.411 11:39:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 
11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # continue 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.412 11:39:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.412 11:39:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.412 11:39:19 -- setup/common.sh@33 -- # echo 0 00:03:29.412 11:39:19 -- setup/common.sh@33 -- # return 0 00:03:29.412 11:39:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.412 11:39:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.412 11:39:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.412 11:39:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.412 11:39:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.412 node0=1024 expecting 1024 00:03:29.412 11:39:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.412 00:03:29.412 real 0m6.788s 00:03:29.412 user 0m2.527s 00:03:29.412 sys 0m4.376s 00:03:29.412 11:39:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.412 11:39:19 -- common/autotest_common.sh@10 -- # set +x 00:03:29.412 ************************************ 00:03:29.412 END TEST no_shrink_alloc 00:03:29.412 ************************************ 00:03:29.412 11:39:19 -- setup/hugepages.sh@217 -- # clear_hp 00:03:29.412 11:39:19 -- setup/hugepages.sh@37 -- # local node hp 00:03:29.412 11:39:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.412 
11:39:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.412 11:39:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.412 11:39:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.412 11:39:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.412 11:39:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.412 11:39:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.412 11:39:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.412 11:39:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.412 11:39:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.412 11:39:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:29.412 11:39:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:29.412 00:03:29.412 real 0m26.536s 00:03:29.412 user 0m9.192s 00:03:29.412 sys 0m15.794s 00:03:29.412 11:39:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.412 11:39:19 -- common/autotest_common.sh@10 -- # set +x 00:03:29.412 ************************************ 00:03:29.412 END TEST hugepages 00:03:29.412 ************************************ 00:03:29.412 11:39:19 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:29.412 11:39:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.412 11:39:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.412 11:39:19 -- common/autotest_common.sh@10 -- # set +x 00:03:29.672 ************************************ 00:03:29.672 START TEST driver 00:03:29.672 ************************************ 00:03:29.672 11:39:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:29.672 * Looking for test storage... 
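The HugePages_* scan traced above is setup/common.sh's get_meminfo helper reading /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time with IFS=': ' and "read -r var val _" until it reaches the requested counter; that is how the hugepages test confirms HugePages_Total=1024, HugePages_Rsvd=0 and HugePages_Surp=0 before checking node0. A condensed, stand-alone sketch of the same lookup (a simplified stand-in for illustration, not the script's actual helper):

# Sketch only: simplified stand-in for setup/common.sh's get_meminfo, not the real code.
get_meminfo_sketch() {
    local key=$1 node=$2
    local file=/proc/meminfo
    # Per-node files prefix every line with "Node <n>", so strip that before matching.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* *//' "$file" | awk -v k="$key" -F': *' '$1 == k { print $2 + 0 }'
}
# Values seen in this run: get_meminfo_sketch HugePages_Total -> 1024,
# get_meminfo_sketch HugePages_Surp 0 -> 0.
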
00:03:29.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:29.672 11:39:20 -- setup/driver.sh@68 -- # setup reset 00:03:29.672 11:39:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.672 11:39:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.953 11:39:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:34.953 11:39:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.953 11:39:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.953 11:39:24 -- common/autotest_common.sh@10 -- # set +x 00:03:34.953 ************************************ 00:03:34.953 START TEST guess_driver 00:03:34.953 ************************************ 00:03:34.953 11:39:24 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:34.953 11:39:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:34.953 11:39:24 -- setup/driver.sh@47 -- # local fail=0 00:03:34.953 11:39:24 -- setup/driver.sh@49 -- # pick_driver 00:03:34.953 11:39:24 -- setup/driver.sh@36 -- # vfio 00:03:34.953 11:39:24 -- setup/driver.sh@21 -- # local iommu_grups 00:03:34.953 11:39:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:34.953 11:39:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:34.953 11:39:24 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:34.953 11:39:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:34.953 11:39:24 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:34.953 11:39:24 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:34.953 11:39:24 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:34.953 11:39:24 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:34.953 11:39:24 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:34.953 11:39:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:34.953 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:34.953 11:39:24 -- setup/driver.sh@30 -- # return 0 00:03:34.953 11:39:24 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:34.953 11:39:24 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:34.953 11:39:24 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:34.953 11:39:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:34.953 Looking for driver=vfio-pci 00:03:34.953 11:39:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.953 11:39:24 -- setup/driver.sh@45 -- # setup output config 00:03:34.953 11:39:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.953 11:39:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.566 11:39:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:27 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:37.566 11:39:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.566 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.566 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.566 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.826 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.826 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.826 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.826 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.826 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.826 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.826 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.826 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.826 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.826 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.826 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.826 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.826 11:39:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.826 11:39:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.826 11:39:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.206 11:39:29 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:39.206 11:39:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.206 11:39:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.465 11:39:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:39.465 11:39:29 -- setup/driver.sh@65 -- # setup reset 00:03:39.465 11:39:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.466 11:39:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.741 00:03:44.741 real 0m9.712s 00:03:44.741 user 0m2.628s 00:03:44.741 sys 0m4.882s 00:03:44.741 11:39:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:44.741 11:39:34 -- common/autotest_common.sh@10 -- # set +x 00:03:44.741 ************************************ 00:03:44.741 END TEST guess_driver 00:03:44.741 ************************************ 00:03:44.741 00:03:44.741 real 0m14.405s 00:03:44.741 user 0m3.886s 00:03:44.741 sys 0m7.496s 00:03:44.741 11:39:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:44.741 11:39:34 -- common/autotest_common.sh@10 -- # set +x 00:03:44.741 ************************************ 00:03:44.741 END TEST driver 00:03:44.741 ************************************ 00:03:44.741 11:39:34 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:44.741 11:39:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.741 11:39:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.741 11:39:34 -- common/autotest_common.sh@10 -- # set +x 00:03:44.741 ************************************ 00:03:44.741 START TEST devices 00:03:44.741 ************************************ 00:03:44.741 11:39:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:44.741 * Looking for test storage... 
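The guess_driver run above lands on vfio-pci: setup/driver.sh counts the entries under /sys/kernel/iommu_groups (176 on this node), notes enable_unsafe_noiommu_mode=N, and accepts the driver once "modprobe --show-depends vfio_pci" resolves to real .ko modules. A rough sketch of that decision, condensed from the trace; the fallback string only mirrors the "No valid driver found" marker the script compares against, everything else is simplified:

# Sketch only: condensed from the pick_driver/vfio checks visible in the trace, not verbatim.
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci needs a working IOMMU (non-empty groups) and a loadable vfio_pci module.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
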
00:03:44.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.741 11:39:34 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:44.741 11:39:34 -- setup/devices.sh@192 -- # setup reset 00:03:44.741 11:39:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.741 11:39:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.030 11:39:38 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:48.030 11:39:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:48.030 11:39:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:48.030 11:39:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:48.030 11:39:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.030 11:39:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:48.030 11:39:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:48.030 11:39:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.030 11:39:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.030 11:39:38 -- setup/devices.sh@196 -- # blocks=() 00:03:48.030 11:39:38 -- setup/devices.sh@196 -- # declare -a blocks 00:03:48.030 11:39:38 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:48.030 11:39:38 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:48.030 11:39:38 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:48.030 11:39:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:48.030 11:39:38 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:48.030 11:39:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:48.030 11:39:38 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:48.030 11:39:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:48.030 11:39:38 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:48.030 11:39:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:48.030 11:39:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:48.030 No valid GPT data, bailing 00:03:48.289 11:39:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.289 11:39:38 -- scripts/common.sh@391 -- # pt= 00:03:48.289 11:39:38 -- scripts/common.sh@392 -- # return 1 00:03:48.289 11:39:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:48.289 11:39:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:48.289 11:39:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:48.289 11:39:38 -- setup/common.sh@80 -- # echo 1600321314816 00:03:48.289 11:39:38 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:48.289 11:39:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:48.289 11:39:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:48.289 11:39:38 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:48.289 11:39:38 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:48.289 11:39:38 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:48.289 11:39:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.289 11:39:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.289 11:39:38 -- common/autotest_common.sh@10 -- # set +x 00:03:48.289 ************************************ 00:03:48.289 START TEST nvme_mount 00:03:48.289 ************************************ 00:03:48.289 11:39:38 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:48.289 11:39:38 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:48.289 11:39:38 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:48.290 11:39:38 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.290 11:39:38 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.290 11:39:38 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:48.290 11:39:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:48.290 11:39:38 -- setup/common.sh@40 -- # local part_no=1 00:03:48.290 11:39:38 -- setup/common.sh@41 -- # local size=1073741824 00:03:48.290 11:39:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:48.290 11:39:38 -- setup/common.sh@44 -- # parts=() 00:03:48.290 11:39:38 -- setup/common.sh@44 -- # local parts 00:03:48.290 11:39:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:48.290 11:39:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:48.290 11:39:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:48.290 11:39:38 -- setup/common.sh@46 -- # (( part++ )) 00:03:48.290 11:39:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:48.290 11:39:38 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:48.290 11:39:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:48.290 11:39:38 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:49.226 Creating new GPT entries in memory. 00:03:49.226 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:49.226 other utilities. 00:03:49.226 11:39:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:49.226 11:39:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.226 11:39:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:49.226 11:39:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:49.226 11:39:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:50.605 Creating new GPT entries in memory. 00:03:50.605 The operation has completed successfully. 
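The sgdisk output just above and the mkfs/mount steps that follow are the core of the nvme_mount test: wipe the GPT on /dev/nvme0n1, create a 1 GiB partition while sync_dev_uevents.sh waits for nvme0n1p1 to appear, then format and mount it so the test file test_nvme can be written. The same flow, condensed from the trace (paths shortened; MOUNT_DIR is a hypothetical stand-in for the long workspace nvme_mount path):

disk=/dev/nvme0n1
MOUNT_DIR=/path/to/spdk/test/setup/nvme_mount        # stand-in; the real path is the workspace nvme_mount dir
sgdisk "$disk" --zap-all                             # destroy any existing partition table (logged above)
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # 1 GiB partition: 2097152 sectors of 512 B
mkfs.ext4 -qF "${disk}p1"                            # quiet, forced format of the new partition
mkdir -p "$MOUNT_DIR"
mount "${disk}p1" "$MOUNT_DIR"                       # mounted so the test can write and verify test_nvme
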
00:03:50.605 11:39:40 -- setup/common.sh@57 -- # (( part++ )) 00:03:50.605 11:39:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.605 11:39:40 -- setup/common.sh@62 -- # wait 2270326 00:03:50.605 11:39:40 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.605 11:39:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:50.605 11:39:40 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.605 11:39:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:50.605 11:39:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:50.605 11:39:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.605 11:39:40 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.605 11:39:40 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:50.605 11:39:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:50.605 11:39:40 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.605 11:39:40 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.605 11:39:40 -- setup/devices.sh@53 -- # local found=0 00:03:50.605 11:39:40 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.605 11:39:40 -- setup/devices.sh@56 -- # : 00:03:50.605 11:39:40 -- setup/devices.sh@59 -- # local pci status 00:03:50.605 11:39:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.605 11:39:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:50.605 11:39:40 -- setup/devices.sh@47 -- # setup output config 00:03:50.605 11:39:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.605 11:39:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.144 11:39:43 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:53.144 11:39:43 -- setup/devices.sh@63 -- # found=1 00:03:53.144 11:39:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.405 11:39:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.405 11:39:43 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.405 11:39:43 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.405 11:39:43 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.405 11:39:43 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.405 11:39:43 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:53.405 11:39:43 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.405 11:39:43 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.405 11:39:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.405 11:39:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.405 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.405 11:39:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.405 11:39:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.664 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:53.664 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:53.664 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.664 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.664 11:39:44 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:53.664 11:39:44 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:53.664 11:39:44 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.664 11:39:44 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.664 11:39:44 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.664 11:39:44 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.664 11:39:44 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.664 11:39:44 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:53.664 11:39:44 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:53.664 11:39:44 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.664 11:39:44 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.664 11:39:44 -- setup/devices.sh@53 -- # local found=0 00:03:53.664 11:39:44 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.664 11:39:44 -- setup/devices.sh@56 -- # : 00:03:53.664 11:39:44 -- setup/devices.sh@59 -- # local pci status 00:03:53.664 11:39:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.664 11:39:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:53.664 11:39:44 -- setup/devices.sh@47 -- # setup output config 00:03:53.664 11:39:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.664 11:39:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.963 11:39:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:56.963 11:39:47 -- setup/devices.sh@63 -- # found=1 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.963 11:39:47 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.963 11:39:47 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.963 11:39:47 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.963 11:39:47 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.963 11:39:47 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.963 11:39:47 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:56.963 11:39:47 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:56.963 11:39:47 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:56.963 11:39:47 -- setup/devices.sh@50 -- # local mount_point= 00:03:56.963 11:39:47 -- setup/devices.sh@51 -- # local test_file= 00:03:56.963 11:39:47 -- setup/devices.sh@53 -- # local found=0 00:03:56.963 11:39:47 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.963 11:39:47 -- setup/devices.sh@59 -- # local pci status 00:03:56.963 11:39:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.963 11:39:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:56.963 11:39:47 -- setup/devices.sh@47 -- # setup output config 00:03:56.963 11:39:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.963 11:39:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.277 11:39:50 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.277 11:39:50 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:00.277 11:39:50 -- setup/devices.sh@63 -- # found=1 00:04:00.277 11:39:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.277 11:39:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.278 11:39:50 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:00.278 11:39:50 -- setup/devices.sh@68 -- # return 0 00:04:00.278 11:39:50 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:00.278 11:39:50 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.278 11:39:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:00.278 11:39:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.278 11:39:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.278 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:00.278 00:04:00.278 real 0m11.925s 00:04:00.278 user 0m3.365s 00:04:00.278 sys 0m6.371s 00:04:00.278 11:39:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.278 11:39:50 -- common/autotest_common.sh@10 -- # set +x 00:04:00.278 ************************************ 00:04:00.278 END TEST nvme_mount 00:04:00.278 ************************************ 00:04:00.278 11:39:50 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:00.278 11:39:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.278 11:39:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.278 11:39:50 -- common/autotest_common.sh@10 -- # set +x 00:04:00.538 ************************************ 00:04:00.538 START TEST dm_mount 00:04:00.538 ************************************ 00:04:00.538 11:39:50 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:00.538 11:39:50 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:00.538 11:39:50 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:00.538 11:39:50 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:00.538 11:39:50 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:00.538 11:39:50 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:00.538 11:39:50 -- setup/common.sh@40 -- # local part_no=2 00:04:00.538 11:39:50 -- setup/common.sh@41 -- # local size=1073741824 00:04:00.538 11:39:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:00.538 11:39:50 -- setup/common.sh@44 -- # parts=() 00:04:00.538 11:39:50 -- setup/common.sh@44 -- # local parts 00:04:00.538 11:39:50 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:00.538 11:39:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.538 11:39:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:00.538 11:39:50 -- setup/common.sh@46 -- # (( part++ )) 00:04:00.538 11:39:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.538 11:39:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:00.538 11:39:50 -- setup/common.sh@46 -- # (( part++ )) 00:04:00.538 11:39:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.538 11:39:50 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:00.538 11:39:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:00.538 11:39:50 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:01.477 Creating new GPT entries in memory. 00:04:01.477 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:01.477 other utilities. 00:04:01.477 11:39:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:01.477 11:39:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.477 11:39:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.477 11:39:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.477 11:39:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:02.415 Creating new GPT entries in memory. 00:04:02.415 The operation has completed successfully. 
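Aside: the sgdisk sequence traced above is partition_drive wiping the GPT and then creating one 1 GiB partition per loop pass (the second pass follows below). A minimal standalone sketch of the same flow, with the device path and size written out for illustration rather than read from the harness:
  # Illustrative sketch of the partitioning flow seen above, not the test code itself.
  DISK=/dev/nvme0n1                    # illustrative device path
  size=$(( 1073741824 / 512 ))         # 1 GiB expressed in 512-byte sectors
  sgdisk "$DISK" --zap-all             # destroy any existing GPT/MBR structures
  part_start=0
  part_end=0
  for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock serializes partition-table updates against concurrent users of the disk
    flock "$DISK" sgdisk "$DISK" --new=${part}:${part_start}:${part_end}
  done
  partprobe "$DISK"                    # ask the kernel to re-read the partition table
With size = 2097152 sectors this reproduces the --new=1:2048:2099199 and --new=2:2099200:4196351 calls recorded in the trace.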
00:04:02.415 11:39:52 -- setup/common.sh@57 -- # (( part++ )) 00:04:02.415 11:39:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.415 11:39:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.415 11:39:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.415 11:39:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:03.796 The operation has completed successfully. 00:04:03.796 11:39:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:03.796 11:39:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.796 11:39:53 -- setup/common.sh@62 -- # wait 2274742 00:04:03.796 11:39:54 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:03.796 11:39:54 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.796 11:39:54 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.796 11:39:54 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:03.796 11:39:54 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:03.796 11:39:54 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.796 11:39:54 -- setup/devices.sh@161 -- # break 00:04:03.796 11:39:54 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.796 11:39:54 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:03.796 11:39:54 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:03.796 11:39:54 -- setup/devices.sh@166 -- # dm=dm-0 00:04:03.796 11:39:54 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:03.796 11:39:54 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:03.796 11:39:54 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.796 11:39:54 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:03.796 11:39:54 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.796 11:39:54 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.796 11:39:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:03.796 11:39:54 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.796 11:39:54 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.796 11:39:54 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:03.796 11:39:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:03.796 11:39:54 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.796 11:39:54 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.796 11:39:54 -- setup/devices.sh@53 -- # local found=0 00:04:03.796 11:39:54 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.796 11:39:54 -- setup/devices.sh@56 -- # : 00:04:03.796 11:39:54 -- 
setup/devices.sh@59 -- # local pci status 00:04:03.796 11:39:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.796 11:39:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:03.796 11:39:54 -- setup/devices.sh@47 -- # setup output config 00:04:03.796 11:39:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.796 11:39:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.332 11:39:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.332 11:39:56 -- 
setup/devices.sh@63 -- # found=1 00:04:06.332 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.592 11:39:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.592 11:39:56 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:06.592 11:39:56 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.592 11:39:56 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.592 11:39:56 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.592 11:39:56 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.592 11:39:56 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:06.592 11:39:56 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:06.592 11:39:56 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:06.592 11:39:56 -- setup/devices.sh@50 -- # local mount_point= 00:04:06.592 11:39:56 -- setup/devices.sh@51 -- # local test_file= 00:04:06.592 11:39:56 -- setup/devices.sh@53 -- # local found=0 00:04:06.592 11:39:56 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:06.592 11:39:56 -- setup/devices.sh@59 -- # local pci status 00:04:06.592 11:39:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.592 11:39:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:06.592 11:39:56 -- setup/devices.sh@47 -- # setup output config 00:04:06.592 11:39:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.592 11:39:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.886 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.887 11:39:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:09.887 11:39:59 -- setup/devices.sh@63 -- # found=1 00:04:09.887 11:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.887 11:40:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.887 11:40:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.887 11:40:00 -- setup/devices.sh@68 -- # return 0 00:04:09.887 11:40:00 -- setup/devices.sh@187 -- # cleanup_dm 00:04:09.887 11:40:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.887 11:40:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:09.887 11:40:00 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:09.887 11:40:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.887 11:40:00 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:09.887 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.887 11:40:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:09.887 11:40:00 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:09.887 00:04:09.887 real 0m9.282s 00:04:09.887 user 0m2.161s 00:04:09.887 sys 0m4.127s 00:04:09.887 11:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.887 11:40:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.887 ************************************ 00:04:09.887 END TEST dm_mount 00:04:09.887 ************************************ 00:04:09.887 11:40:00 -- setup/devices.sh@1 -- # cleanup 00:04:09.887 11:40:00 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:09.887 11:40:00 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.887 11:40:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.887 11:40:00 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.887 11:40:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.887 11:40:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.146 /dev/nvme0n1: 8 bytes were erased at 
offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.146 11:40:00 -- setup/devices.sh@12 -- # cleanup_dm 00:04:10.146 11:40:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.146 11:40:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.146 11:40:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.146 11:40:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.146 11:40:00 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.146 11:40:00 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.146 00:04:10.146 real 0m25.842s 00:04:10.146 user 0m7.130s 00:04:10.146 sys 0m13.395s 00:04:10.146 11:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.146 11:40:00 -- common/autotest_common.sh@10 -- # set +x 00:04:10.146 ************************************ 00:04:10.146 END TEST devices 00:04:10.146 ************************************ 00:04:10.146 00:04:10.146 real 1m31.878s 00:04:10.146 user 0m28.284s 00:04:10.146 sys 0m51.806s 00:04:10.146 11:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.146 11:40:00 -- common/autotest_common.sh@10 -- # set +x 00:04:10.146 ************************************ 00:04:10.146 END TEST setup.sh 00:04:10.146 ************************************ 00:04:10.146 11:40:00 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:13.439 Hugepages 00:04:13.439 node hugesize free / total 00:04:13.439 node0 1048576kB 0 / 0 00:04:13.439 node0 2048kB 2048 / 2048 00:04:13.439 node1 1048576kB 0 / 0 00:04:13.439 node1 2048kB 0 / 0 00:04:13.439 00:04:13.439 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.439 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.439 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.439 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.439 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.439 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.440 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.440 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.440 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.440 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:13.440 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:13.440 11:40:03 -- spdk/autotest.sh@130 -- # uname -s 00:04:13.440 11:40:03 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:13.440 11:40:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:13.440 11:40:03 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.730 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.2 (8086 
2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.730 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.169 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.434 11:40:08 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:19.376 11:40:09 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:19.376 11:40:09 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:19.376 11:40:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.376 11:40:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:19.376 11:40:09 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:19.376 11:40:09 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:19.376 11:40:09 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.376 11:40:09 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.376 11:40:09 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:19.376 11:40:09 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:19.376 11:40:09 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:04:19.376 11:40:09 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.667 Waiting for block devices as requested 00:04:22.667 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:22.667 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:22.927 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:22.927 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:22.927 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.186 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.186 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.186 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.186 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.455 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.455 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.455 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.714 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.714 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.714 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.973 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.973 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:24.231 11:40:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:24.231 11:40:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1488 -- # grep 0000:d8:00.0/nvme/nvme 00:04:24.231 11:40:14 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:24.231 11:40:14 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:24.231 11:40:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:24.231 11:40:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:24.231 11:40:14 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:24.231 11:40:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:24.231 11:40:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:24.231 11:40:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:24.231 11:40:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:24.231 11:40:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:24.231 11:40:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:24.231 11:40:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:24.231 11:40:14 -- common/autotest_common.sh@1543 -- # continue 00:04:24.231 11:40:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:24.231 11:40:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:24.231 11:40:14 -- common/autotest_common.sh@10 -- # set +x 00:04:24.231 11:40:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:24.231 11:40:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:24.231 11:40:14 -- common/autotest_common.sh@10 -- # set +x 00:04:24.231 11:40:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.521 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.521 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.900 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.900 11:40:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:28.900 11:40:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:28.900 11:40:19 -- common/autotest_common.sh@10 -- # set +x 00:04:29.159 11:40:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:29.159 11:40:19 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:29.159 11:40:19 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.159 11:40:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:29.159 11:40:19 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:29.159 11:40:19 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:29.159 11:40:19 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:29.159 
11:40:19 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:29.159 11:40:19 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.159 11:40:19 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.159 11:40:19 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:29.159 11:40:19 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:29.159 11:40:19 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:04:29.159 11:40:19 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:29.159 11:40:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:29.159 11:40:19 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:29.159 11:40:19 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:29.159 11:40:19 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:29.159 11:40:19 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:d8:00.0 00:04:29.159 11:40:19 -- common/autotest_common.sh@1578 -- # [[ -z 0000:d8:00.0 ]] 00:04:29.159 11:40:19 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2284257 00:04:29.159 11:40:19 -- common/autotest_common.sh@1584 -- # waitforlisten 2284257 00:04:29.159 11:40:19 -- common/autotest_common.sh@817 -- # '[' -z 2284257 ']' 00:04:29.159 11:40:19 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.159 11:40:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.159 11:40:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.159 11:40:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.159 11:40:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.159 11:40:19 -- common/autotest_common.sh@10 -- # set +x 00:04:29.159 [2024-04-18 11:40:19.676851] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
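Aside: at this point spdk_tgt has just been launched in the background and waitforlisten is blocking until the RPC socket at /var/tmp/spdk.sock answers. A hedged, standalone approximation of that startup dance (SPDK checkout path is illustrative; this is not the waitforlisten implementation itself):
  # Illustrative: start the SPDK target and poll its RPC socket instead of sleeping blindly.
  SPDK=/path/to/spdk                                  # illustrative checkout location
  "$SPDK/build/bin/spdk_tgt" &
  tgt_pid=$!
  # rpc_get_methods is a cheap RPC; retry until the socket accepts it.
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "spdk_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done
  echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"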
00:04:29.159 [2024-04-18 11:40:19.676944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284257 ] 00:04:29.418 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.419 [2024-04-18 11:40:19.800769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.678 [2024-04-18 11:40:20.003426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.616 11:40:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:30.616 11:40:20 -- common/autotest_common.sh@850 -- # return 0 00:04:30.616 11:40:20 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:30.616 11:40:20 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:30.616 11:40:20 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:33.909 nvme0n1 00:04:33.909 11:40:23 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:33.909 [2024-04-18 11:40:24.075699] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:33.909 request: 00:04:33.909 { 00:04:33.909 "nvme_ctrlr_name": "nvme0", 00:04:33.909 "password": "test", 00:04:33.909 "method": "bdev_nvme_opal_revert", 00:04:33.909 "req_id": 1 00:04:33.909 } 00:04:33.909 Got JSON-RPC error response 00:04:33.909 response: 00:04:33.909 { 00:04:33.909 "code": -32602, 00:04:33.909 "message": "Invalid parameters" 00:04:33.909 } 00:04:33.909 11:40:24 -- common/autotest_common.sh@1590 -- # true 00:04:33.909 11:40:24 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:33.909 11:40:24 -- common/autotest_common.sh@1594 -- # killprocess 2284257 00:04:33.909 11:40:24 -- common/autotest_common.sh@936 -- # '[' -z 2284257 ']' 00:04:33.909 11:40:24 -- common/autotest_common.sh@940 -- # kill -0 2284257 00:04:33.909 11:40:24 -- common/autotest_common.sh@941 -- # uname 00:04:33.909 11:40:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.909 11:40:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2284257 00:04:33.909 11:40:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.909 11:40:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.909 11:40:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2284257' 00:04:33.909 killing process with pid 2284257 00:04:33.909 11:40:24 -- common/autotest_common.sh@955 -- # kill 2284257 00:04:33.909 11:40:24 -- common/autotest_common.sh@960 -- # wait 2284257 00:04:38.120 11:40:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:38.120 11:40:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:38.120 11:40:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.120 11:40:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.120 11:40:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:38.120 11:40:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.120 11:40:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.120 11:40:28 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:38.120 11:40:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.120 11:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 
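Aside: the JSON-RPC error above (-32602, "nvme0 not support opal") is the expected outcome on a drive without Opal support, which is why opal_revert_cleanup tolerates it and simply tears the target down. The pattern reduces to roughly the sketch below (controller name and RPC invocations as seen in this log, the surrounding glue illustrative):
  # Best-effort Opal revert: a failure on non-Opal drives is ignored,
  # but the target process is always cleaned up afterwards.
  "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  "$SPDK/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true
  kill "$tgt_pid"
  wait "$tgt_pid" 2>/dev/null || true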
00:04:38.120 11:40:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.120 ************************************ 00:04:38.120 START TEST env 00:04:38.120 ************************************ 00:04:38.120 11:40:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:38.120 * Looking for test storage... 00:04:38.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:38.120 11:40:28 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.120 11:40:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.120 11:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.120 11:40:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.120 ************************************ 00:04:38.120 START TEST env_memory 00:04:38.120 ************************************ 00:04:38.120 11:40:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.120 00:04:38.120 00:04:38.120 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.120 http://cunit.sourceforge.net/ 00:04:38.120 00:04:38.120 00:04:38.120 Suite: memory 00:04:38.120 Test: alloc and free memory map ...[2024-04-18 11:40:28.601018] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:38.120 passed 00:04:38.120 Test: mem map translation ...[2024-04-18 11:40:28.636251] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:38.120 [2024-04-18 11:40:28.636279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:38.120 [2024-04-18 11:40:28.636332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:38.120 [2024-04-18 11:40:28.636351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:38.444 passed 00:04:38.444 Test: mem map registration ...[2024-04-18 11:40:28.693797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:38.444 [2024-04-18 11:40:28.693824] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:38.444 passed 00:04:38.444 Test: mem map adjacent registrations ...passed 00:04:38.444 00:04:38.444 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.444 suites 1 1 n/a 0 0 00:04:38.444 tests 4 4 4 0 0 00:04:38.444 asserts 152 152 152 0 n/a 00:04:38.444 00:04:38.444 Elapsed time = 0.209 seconds 00:04:38.444 00:04:38.444 real 0m0.252s 00:04:38.444 user 0m0.225s 00:04:38.444 sys 0m0.026s 00:04:38.444 11:40:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:38.444 11:40:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.444 ************************************ 00:04:38.444 END TEST env_memory 00:04:38.444 ************************************ 
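Aside: every test in this run is wrapped by the run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timing seen above for env_memory. A deliberately simplified stand-in is sketched below; the real helper in autotest_common.sh does more (argument checks, xtrace and timing bookkeeping):
  # Simplified, illustrative stand-in for the run_test wrapper; not the SPDK implementation.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test env_memory "$SPDK/test/env/memory/memory_ut"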
00:04:38.444 11:40:28 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:38.444 11:40:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.444 11:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.444 11:40:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.704 ************************************ 00:04:38.704 START TEST env_vtophys 00:04:38.704 ************************************ 00:04:38.704 11:40:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:38.704 EAL: lib.eal log level changed from notice to debug 00:04:38.704 EAL: Detected lcore 0 as core 0 on socket 0 00:04:38.704 EAL: Detected lcore 1 as core 1 on socket 0 00:04:38.704 EAL: Detected lcore 2 as core 2 on socket 0 00:04:38.704 EAL: Detected lcore 3 as core 3 on socket 0 00:04:38.704 EAL: Detected lcore 4 as core 4 on socket 0 00:04:38.704 EAL: Detected lcore 5 as core 5 on socket 0 00:04:38.704 EAL: Detected lcore 6 as core 6 on socket 0 00:04:38.704 EAL: Detected lcore 7 as core 8 on socket 0 00:04:38.704 EAL: Detected lcore 8 as core 9 on socket 0 00:04:38.704 EAL: Detected lcore 9 as core 10 on socket 0 00:04:38.704 EAL: Detected lcore 10 as core 11 on socket 0 00:04:38.704 EAL: Detected lcore 11 as core 12 on socket 0 00:04:38.704 EAL: Detected lcore 12 as core 13 on socket 0 00:04:38.704 EAL: Detected lcore 13 as core 14 on socket 0 00:04:38.704 EAL: Detected lcore 14 as core 16 on socket 0 00:04:38.704 EAL: Detected lcore 15 as core 17 on socket 0 00:04:38.704 EAL: Detected lcore 16 as core 18 on socket 0 00:04:38.704 EAL: Detected lcore 17 as core 19 on socket 0 00:04:38.704 EAL: Detected lcore 18 as core 20 on socket 0 00:04:38.704 EAL: Detected lcore 19 as core 21 on socket 0 00:04:38.704 EAL: Detected lcore 20 as core 22 on socket 0 00:04:38.704 EAL: Detected lcore 21 as core 24 on socket 0 00:04:38.704 EAL: Detected lcore 22 as core 25 on socket 0 00:04:38.704 EAL: Detected lcore 23 as core 26 on socket 0 00:04:38.704 EAL: Detected lcore 24 as core 27 on socket 0 00:04:38.704 EAL: Detected lcore 25 as core 28 on socket 0 00:04:38.704 EAL: Detected lcore 26 as core 29 on socket 0 00:04:38.704 EAL: Detected lcore 27 as core 30 on socket 0 00:04:38.704 EAL: Detected lcore 28 as core 0 on socket 1 00:04:38.704 EAL: Detected lcore 29 as core 1 on socket 1 00:04:38.704 EAL: Detected lcore 30 as core 2 on socket 1 00:04:38.704 EAL: Detected lcore 31 as core 3 on socket 1 00:04:38.704 EAL: Detected lcore 32 as core 4 on socket 1 00:04:38.704 EAL: Detected lcore 33 as core 5 on socket 1 00:04:38.704 EAL: Detected lcore 34 as core 6 on socket 1 00:04:38.704 EAL: Detected lcore 35 as core 8 on socket 1 00:04:38.704 EAL: Detected lcore 36 as core 9 on socket 1 00:04:38.704 EAL: Detected lcore 37 as core 10 on socket 1 00:04:38.704 EAL: Detected lcore 38 as core 11 on socket 1 00:04:38.704 EAL: Detected lcore 39 as core 12 on socket 1 00:04:38.704 EAL: Detected lcore 40 as core 13 on socket 1 00:04:38.704 EAL: Detected lcore 41 as core 14 on socket 1 00:04:38.704 EAL: Detected lcore 42 as core 16 on socket 1 00:04:38.704 EAL: Detected lcore 43 as core 17 on socket 1 00:04:38.704 EAL: Detected lcore 44 as core 18 on socket 1 00:04:38.704 EAL: Detected lcore 45 as core 19 on socket 1 00:04:38.704 EAL: Detected lcore 46 as core 20 on socket 1 00:04:38.704 EAL: Detected lcore 47 as core 21 on socket 1 00:04:38.704 EAL: Detected lcore 48 as core 22 on 
socket 1 00:04:38.704 EAL: Detected lcore 49 as core 24 on socket 1 00:04:38.704 EAL: Detected lcore 50 as core 25 on socket 1 00:04:38.704 EAL: Detected lcore 51 as core 26 on socket 1 00:04:38.704 EAL: Detected lcore 52 as core 27 on socket 1 00:04:38.704 EAL: Detected lcore 53 as core 28 on socket 1 00:04:38.704 EAL: Detected lcore 54 as core 29 on socket 1 00:04:38.704 EAL: Detected lcore 55 as core 30 on socket 1 00:04:38.704 EAL: Detected lcore 56 as core 0 on socket 0 00:04:38.704 EAL: Detected lcore 57 as core 1 on socket 0 00:04:38.704 EAL: Detected lcore 58 as core 2 on socket 0 00:04:38.704 EAL: Detected lcore 59 as core 3 on socket 0 00:04:38.704 EAL: Detected lcore 60 as core 4 on socket 0 00:04:38.704 EAL: Detected lcore 61 as core 5 on socket 0 00:04:38.704 EAL: Detected lcore 62 as core 6 on socket 0 00:04:38.704 EAL: Detected lcore 63 as core 8 on socket 0 00:04:38.704 EAL: Detected lcore 64 as core 9 on socket 0 00:04:38.704 EAL: Detected lcore 65 as core 10 on socket 0 00:04:38.704 EAL: Detected lcore 66 as core 11 on socket 0 00:04:38.704 EAL: Detected lcore 67 as core 12 on socket 0 00:04:38.704 EAL: Detected lcore 68 as core 13 on socket 0 00:04:38.704 EAL: Detected lcore 69 as core 14 on socket 0 00:04:38.704 EAL: Detected lcore 70 as core 16 on socket 0 00:04:38.704 EAL: Detected lcore 71 as core 17 on socket 0 00:04:38.704 EAL: Detected lcore 72 as core 18 on socket 0 00:04:38.704 EAL: Detected lcore 73 as core 19 on socket 0 00:04:38.704 EAL: Detected lcore 74 as core 20 on socket 0 00:04:38.704 EAL: Detected lcore 75 as core 21 on socket 0 00:04:38.704 EAL: Detected lcore 76 as core 22 on socket 0 00:04:38.704 EAL: Detected lcore 77 as core 24 on socket 0 00:04:38.704 EAL: Detected lcore 78 as core 25 on socket 0 00:04:38.704 EAL: Detected lcore 79 as core 26 on socket 0 00:04:38.704 EAL: Detected lcore 80 as core 27 on socket 0 00:04:38.704 EAL: Detected lcore 81 as core 28 on socket 0 00:04:38.704 EAL: Detected lcore 82 as core 29 on socket 0 00:04:38.704 EAL: Detected lcore 83 as core 30 on socket 0 00:04:38.704 EAL: Detected lcore 84 as core 0 on socket 1 00:04:38.704 EAL: Detected lcore 85 as core 1 on socket 1 00:04:38.704 EAL: Detected lcore 86 as core 2 on socket 1 00:04:38.704 EAL: Detected lcore 87 as core 3 on socket 1 00:04:38.704 EAL: Detected lcore 88 as core 4 on socket 1 00:04:38.704 EAL: Detected lcore 89 as core 5 on socket 1 00:04:38.704 EAL: Detected lcore 90 as core 6 on socket 1 00:04:38.704 EAL: Detected lcore 91 as core 8 on socket 1 00:04:38.704 EAL: Detected lcore 92 as core 9 on socket 1 00:04:38.704 EAL: Detected lcore 93 as core 10 on socket 1 00:04:38.704 EAL: Detected lcore 94 as core 11 on socket 1 00:04:38.704 EAL: Detected lcore 95 as core 12 on socket 1 00:04:38.704 EAL: Detected lcore 96 as core 13 on socket 1 00:04:38.704 EAL: Detected lcore 97 as core 14 on socket 1 00:04:38.704 EAL: Detected lcore 98 as core 16 on socket 1 00:04:38.704 EAL: Detected lcore 99 as core 17 on socket 1 00:04:38.704 EAL: Detected lcore 100 as core 18 on socket 1 00:04:38.704 EAL: Detected lcore 101 as core 19 on socket 1 00:04:38.705 EAL: Detected lcore 102 as core 20 on socket 1 00:04:38.705 EAL: Detected lcore 103 as core 21 on socket 1 00:04:38.705 EAL: Detected lcore 104 as core 22 on socket 1 00:04:38.705 EAL: Detected lcore 105 as core 24 on socket 1 00:04:38.705 EAL: Detected lcore 106 as core 25 on socket 1 00:04:38.705 EAL: Detected lcore 107 as core 26 on socket 1 00:04:38.705 EAL: Detected lcore 108 as core 27 on socket 1 00:04:38.705 
EAL: Detected lcore 109 as core 28 on socket 1 00:04:38.705 EAL: Detected lcore 110 as core 29 on socket 1 00:04:38.705 EAL: Detected lcore 111 as core 30 on socket 1 00:04:38.705 EAL: Maximum logical cores by configuration: 128 00:04:38.705 EAL: Detected CPU lcores: 112 00:04:38.705 EAL: Detected NUMA nodes: 2 00:04:38.705 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:38.705 EAL: Detected shared linkage of DPDK 00:04:38.705 EAL: No shared files mode enabled, IPC will be disabled 00:04:38.705 EAL: Bus pci wants IOVA as 'DC' 00:04:38.705 EAL: Buses did not request a specific IOVA mode. 00:04:38.705 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:38.705 EAL: Selected IOVA mode 'VA' 00:04:38.705 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.705 EAL: Probing VFIO support... 00:04:38.705 EAL: IOMMU type 1 (Type 1) is supported 00:04:38.705 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:38.705 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:38.705 EAL: VFIO support initialized 00:04:38.705 EAL: Ask a virtual area of 0x2e000 bytes 00:04:38.705 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:38.705 EAL: Setting up physically contiguous memory... 00:04:38.705 EAL: Setting maximum number of open files to 524288 00:04:38.705 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:38.705 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:38.705 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:38.705 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x201000a00000 
(size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:38.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.705 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:38.705 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.705 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:38.705 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:38.705 EAL: Hugepages will be freed exactly as allocated. 00:04:38.705 EAL: No shared files mode enabled, IPC is disabled 00:04:38.705 EAL: No shared files mode enabled, IPC is disabled 00:04:38.705 EAL: TSC frequency is ~2500000 KHz 00:04:38.705 EAL: Main lcore 0 is ready (tid=7f3356809a40;cpuset=[0]) 00:04:38.705 EAL: Trying to obtain current memory policy. 00:04:38.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.705 EAL: Restoring previous memory policy: 0 00:04:38.705 EAL: request: mp_malloc_sync 00:04:38.705 EAL: No shared files mode enabled, IPC is disabled 00:04:38.705 EAL: Heap on socket 0 was expanded by 2MB 00:04:38.705 EAL: No shared files mode enabled, IPC is disabled 00:04:38.705 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:38.705 EAL: Mem event callback 'spdk:(nil)' registered 00:04:38.705 00:04:38.705 00:04:38.705 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.705 http://cunit.sourceforge.net/ 00:04:38.705 00:04:38.705 00:04:38.705 Suite: components_suite 00:04:39.274 Test: vtophys_malloc_test ...passed 00:04:39.274 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:39.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.274 EAL: Restoring previous memory policy: 4 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was expanded by 4MB 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was shrunk by 4MB 00:04:39.274 EAL: Trying to obtain current memory policy. 
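Aside: the "Trying to obtain current memory policy" line above opens the first round of vtophys_spdk_malloc_test. Each round pins allocation to socket 0 with MPOL_PREFERRED, lets EAL expand the heap (the registered 'spdk:(nil)' mem event callback is notified of each change), and shrinks it again when the buffer is freed. One illustrative way to watch those expansions from the host side while the test runs:
  # Illustrative host-side view of the heap growth reported by EAL below:
  grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
  # Per-NUMA-node 2 MB hugepage availability (node0 carries the 2048 pages in this run):
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages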
00:04:39.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.274 EAL: Restoring previous memory policy: 4 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was expanded by 6MB 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was shrunk by 6MB 00:04:39.274 EAL: Trying to obtain current memory policy. 00:04:39.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.274 EAL: Restoring previous memory policy: 4 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was expanded by 10MB 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was shrunk by 10MB 00:04:39.274 EAL: Trying to obtain current memory policy. 00:04:39.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.274 EAL: Restoring previous memory policy: 4 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was expanded by 18MB 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was shrunk by 18MB 00:04:39.274 EAL: Trying to obtain current memory policy. 00:04:39.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.274 EAL: Restoring previous memory policy: 4 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was expanded by 34MB 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was shrunk by 34MB 00:04:39.274 EAL: Trying to obtain current memory policy. 00:04:39.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.274 EAL: Restoring previous memory policy: 4 00:04:39.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.274 EAL: request: mp_malloc_sync 00:04:39.274 EAL: No shared files mode enabled, IPC is disabled 00:04:39.274 EAL: Heap on socket 0 was expanded by 66MB 00:04:39.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.537 EAL: request: mp_malloc_sync 00:04:39.537 EAL: No shared files mode enabled, IPC is disabled 00:04:39.537 EAL: Heap on socket 0 was shrunk by 66MB 00:04:39.537 EAL: Trying to obtain current memory policy. 
00:04:39.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.537 EAL: Restoring previous memory policy: 4 00:04:39.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.537 EAL: request: mp_malloc_sync 00:04:39.537 EAL: No shared files mode enabled, IPC is disabled 00:04:39.537 EAL: Heap on socket 0 was expanded by 130MB 00:04:39.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.798 EAL: request: mp_malloc_sync 00:04:39.798 EAL: No shared files mode enabled, IPC is disabled 00:04:39.798 EAL: Heap on socket 0 was shrunk by 130MB 00:04:40.057 EAL: Trying to obtain current memory policy. 00:04:40.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.057 EAL: Restoring previous memory policy: 4 00:04:40.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.057 EAL: request: mp_malloc_sync 00:04:40.057 EAL: No shared files mode enabled, IPC is disabled 00:04:40.057 EAL: Heap on socket 0 was expanded by 258MB 00:04:40.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.625 EAL: request: mp_malloc_sync 00:04:40.625 EAL: No shared files mode enabled, IPC is disabled 00:04:40.625 EAL: Heap on socket 0 was shrunk by 258MB 00:04:41.192 EAL: Trying to obtain current memory policy. 00:04:41.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.192 EAL: Restoring previous memory policy: 4 00:04:41.192 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.192 EAL: request: mp_malloc_sync 00:04:41.192 EAL: No shared files mode enabled, IPC is disabled 00:04:41.192 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.129 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.388 EAL: request: mp_malloc_sync 00:04:42.388 EAL: No shared files mode enabled, IPC is disabled 00:04:42.388 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.955 EAL: Trying to obtain current memory policy. 
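The expand/shrink pairs above belong to vtophys_spdk_malloc_test: the DPDK heap is grown and shrunk in steps of 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB (each 2^k + 2 MB), with the 1026 MB pass following below, and every heap change fires the registered 'spdk:' mem event callback so SPDK can keep its virtual-to-physical translation map in sync. A rough way to rerun just this env suite from the same workspace is sketched here; this is illustrative only, the env.sh driver path is taken from the script references in this log, and root plus preallocated hugepages are assumed.

```bash
# Sketch: rerun the env test suite that produced the output above.
# Paths follow the workspace layout recorded in this log; run as root
# so the DPDK EAL can map hugepages as it does in this job.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/env/env.sh
```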
00:04:42.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.214 EAL: Restoring previous memory policy: 4 00:04:43.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.214 EAL: request: mp_malloc_sync 00:04:43.214 EAL: No shared files mode enabled, IPC is disabled 00:04:43.214 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.752 EAL: request: mp_malloc_sync 00:04:45.752 EAL: No shared files mode enabled, IPC is disabled 00:04:45.752 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:47.129 passed 00:04:47.129 00:04:47.129 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.129 suites 1 1 n/a 0 0 00:04:47.129 tests 2 2 2 0 0 00:04:47.129 asserts 497 497 497 0 n/a 00:04:47.129 00:04:47.129 Elapsed time = 8.064 seconds 00:04:47.129 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.129 EAL: request: mp_malloc_sync 00:04:47.129 EAL: No shared files mode enabled, IPC is disabled 00:04:47.129 EAL: Heap on socket 0 was shrunk by 2MB 00:04:47.129 EAL: No shared files mode enabled, IPC is disabled 00:04:47.129 EAL: No shared files mode enabled, IPC is disabled 00:04:47.129 EAL: No shared files mode enabled, IPC is disabled 00:04:47.129 00:04:47.129 real 0m8.325s 00:04:47.129 user 0m7.448s 00:04:47.129 sys 0m0.826s 00:04:47.129 11:40:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.129 11:40:37 -- common/autotest_common.sh@10 -- # set +x 00:04:47.129 ************************************ 00:04:47.129 END TEST env_vtophys 00:04:47.129 ************************************ 00:04:47.129 11:40:37 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.129 11:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.129 11:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.129 11:40:37 -- common/autotest_common.sh@10 -- # set +x 00:04:47.129 ************************************ 00:04:47.129 START TEST env_pci 00:04:47.129 ************************************ 00:04:47.129 11:40:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.129 00:04:47.129 00:04:47.129 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.129 http://cunit.sourceforge.net/ 00:04:47.129 00:04:47.129 00:04:47.129 Suite: pci 00:04:47.129 Test: pci_hook ...[2024-04-18 11:40:37.554944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2287444 has claimed it 00:04:47.129 EAL: Cannot find device (10000:00:01.0) 00:04:47.129 EAL: Failed to attach device on primary process 00:04:47.129 passed 00:04:47.129 00:04:47.129 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.129 suites 1 1 n/a 0 0 00:04:47.129 tests 1 1 1 0 0 00:04:47.129 asserts 25 25 25 0 n/a 00:04:47.129 00:04:47.129 Elapsed time = 0.048 seconds 00:04:47.129 00:04:47.129 real 0m0.096s 00:04:47.129 user 0m0.026s 00:04:47.129 sys 0m0.070s 00:04:47.129 11:40:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.129 11:40:37 -- common/autotest_common.sh@10 -- # set +x 00:04:47.129 ************************************ 00:04:47.129 END TEST env_pci 00:04:47.129 ************************************ 00:04:47.129 11:40:37 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.129 11:40:37 -- env/env.sh@15 -- # uname 00:04:47.129 11:40:37 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:04:47.129 11:40:37 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.129 11:40:37 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.129 11:40:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:47.129 11:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.129 11:40:37 -- common/autotest_common.sh@10 -- # set +x 00:04:47.388 ************************************ 00:04:47.388 START TEST env_dpdk_post_init 00:04:47.388 ************************************ 00:04:47.388 11:40:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.388 EAL: Detected CPU lcores: 112 00:04:47.388 EAL: Detected NUMA nodes: 2 00:04:47.388 EAL: Detected shared linkage of DPDK 00:04:47.388 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.388 EAL: Selected IOVA mode 'VA' 00:04:47.388 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.388 EAL: VFIO support initialized 00:04:47.389 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.648 EAL: Using IOMMU type 1 (Type 1) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:47.648 EAL: Ignore mapping IO port bar(1) 00:04:47.648 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:47.908 EAL: Ignore mapping IO port bar(1) 00:04:47.908 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:47.908 EAL: Ignore mapping IO port bar(1) 00:04:47.908 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:47.908 EAL: Ignore mapping IO port bar(1) 00:04:47.908 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:47.908 EAL: Ignore mapping IO port bar(1) 00:04:47.908 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:48.476 EAL: Probe 
PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:52.670 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:52.670 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:52.670 Starting DPDK initialization... 00:04:52.670 Starting SPDK post initialization... 00:04:52.670 SPDK NVMe probe 00:04:52.670 Attaching to 0000:d8:00.0 00:04:52.670 Attached to 0000:d8:00.0 00:04:52.670 Cleaning up... 00:04:52.670 00:04:52.670 real 0m5.014s 00:04:52.670 user 0m3.683s 00:04:52.670 sys 0m0.386s 00:04:52.670 11:40:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.670 11:40:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.670 ************************************ 00:04:52.670 END TEST env_dpdk_post_init 00:04:52.670 ************************************ 00:04:52.670 11:40:42 -- env/env.sh@26 -- # uname 00:04:52.670 11:40:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:52.670 11:40:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.670 11:40:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.670 11:40:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.670 11:40:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.670 ************************************ 00:04:52.670 START TEST env_mem_callbacks 00:04:52.670 ************************************ 00:04:52.670 11:40:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.670 EAL: Detected CPU lcores: 112 00:04:52.670 EAL: Detected NUMA nodes: 2 00:04:52.670 EAL: Detected shared linkage of DPDK 00:04:52.670 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.670 EAL: Selected IOVA mode 'VA' 00:04:52.670 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.670 EAL: VFIO support initialized 00:04:52.670 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.670 00:04:52.670 00:04:52.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.670 http://cunit.sourceforge.net/ 00:04:52.670 00:04:52.670 00:04:52.670 Suite: memory 00:04:52.670 Test: test ... 
00:04:52.670 register 0x200000200000 2097152 00:04:52.670 malloc 3145728 00:04:52.670 register 0x200000400000 4194304 00:04:52.670 buf 0x2000004fffc0 len 3145728 PASSED 00:04:52.670 malloc 64 00:04:52.670 buf 0x2000004ffec0 len 64 PASSED 00:04:52.670 malloc 4194304 00:04:52.670 register 0x200000800000 6291456 00:04:52.670 buf 0x2000009fffc0 len 4194304 PASSED 00:04:52.670 free 0x2000004fffc0 3145728 00:04:52.670 free 0x2000004ffec0 64 00:04:52.670 unregister 0x200000400000 4194304 PASSED 00:04:52.670 free 0x2000009fffc0 4194304 00:04:52.670 unregister 0x200000800000 6291456 PASSED 00:04:52.670 malloc 8388608 00:04:52.670 register 0x200000400000 10485760 00:04:52.670 buf 0x2000005fffc0 len 8388608 PASSED 00:04:52.670 free 0x2000005fffc0 8388608 00:04:52.670 unregister 0x200000400000 10485760 PASSED 00:04:52.670 passed 00:04:52.670 00:04:52.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.670 suites 1 1 n/a 0 0 00:04:52.670 tests 1 1 1 0 0 00:04:52.670 asserts 15 15 15 0 n/a 00:04:52.670 00:04:52.670 Elapsed time = 0.066 seconds 00:04:52.670 00:04:52.670 real 0m0.174s 00:04:52.670 user 0m0.098s 00:04:52.670 sys 0m0.075s 00:04:52.670 11:40:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.670 11:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.670 ************************************ 00:04:52.670 END TEST env_mem_callbacks 00:04:52.670 ************************************ 00:04:52.930 00:04:52.930 real 0m14.986s 00:04:52.930 user 0m11.867s 00:04:52.930 sys 0m2.033s 00:04:52.930 11:40:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.930 11:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.930 ************************************ 00:04:52.930 END TEST env 00:04:52.930 ************************************ 00:04:52.930 11:40:43 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:52.930 11:40:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.930 11:40:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.930 11:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.930 ************************************ 00:04:52.930 START TEST rpc 00:04:52.930 ************************************ 00:04:52.930 11:40:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:53.190 * Looking for test storage... 00:04:53.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:53.190 11:40:43 -- rpc/rpc.sh@65 -- # spdk_pid=2288658 00:04:53.190 11:40:43 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:53.190 11:40:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.190 11:40:43 -- rpc/rpc.sh@67 -- # waitforlisten 2288658 00:04:53.190 11:40:43 -- common/autotest_common.sh@817 -- # '[' -z 2288658 ']' 00:04:53.190 11:40:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.190 11:40:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:53.190 11:40:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
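From this point the rpc suite drives a freshly started spdk_tgt over the UNIX socket /var/tmp/spdk.sock. The rpc_integrity test whose output follows creates a malloc bdev, layers a passthru bdev on top of it, lists both, and deletes them again. By hand the same flow would look roughly like the sketch below; scripts/rpc.py is the standard SPDK RPC client and is assumed here, and the "8 512" arguments correspond to the 16384 blocks of 512 bytes visible in the bdev dump further down.

```bash
# Sketch of the rpc_integrity flow, using the RPC names recorded below.
./build/bin/spdk_tgt -e bdev &                 # target with the 'bdev' tracepoint group, as in this run
# wait for /var/tmp/spdk.sock before issuing RPCs (waitforlisten does this above)
./scripts/rpc.py bdev_malloc_create 8 512      # 8 MB malloc bdev, 512-byte blocks -> Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length    # expect 2: Malloc0 plus Passthru0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
```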
00:04:53.190 11:40:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:53.190 11:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:53.190 [2024-04-18 11:40:43.668691] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:04:53.190 [2024-04-18 11:40:43.668788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288658 ] 00:04:53.190 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.449 [2024-04-18 11:40:43.792964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.708 [2024-04-18 11:40:44.009027] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:53.708 [2024-04-18 11:40:44.009069] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2288658' to capture a snapshot of events at runtime. 00:04:53.708 [2024-04-18 11:40:44.009084] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:53.708 [2024-04-18 11:40:44.009111] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:53.708 [2024-04-18 11:40:44.009124] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2288658 for offline analysis/debug. 00:04:53.708 [2024-04-18 11:40:44.009158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.646 11:40:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.646 11:40:44 -- common/autotest_common.sh@850 -- # return 0 00:04:54.646 11:40:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.646 11:40:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.646 11:40:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:54.646 11:40:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:54.646 11:40:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.646 11:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.646 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:04:54.646 ************************************ 00:04:54.646 START TEST rpc_integrity 00:04:54.646 ************************************ 00:04:54.646 11:40:45 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:54.646 11:40:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.646 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.646 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.646 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.646 11:40:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.647 11:40:45 -- rpc/rpc.sh@13 -- # jq length 00:04:54.647 11:40:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.647 11:40:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.647 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:04:54.647 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.647 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.647 11:40:45 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:54.647 11:40:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.647 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.647 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.647 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.647 11:40:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.647 { 00:04:54.647 "name": "Malloc0", 00:04:54.647 "aliases": [ 00:04:54.647 "2dbbf839-859d-4bd1-81bc-f8439c542f6c" 00:04:54.647 ], 00:04:54.647 "product_name": "Malloc disk", 00:04:54.647 "block_size": 512, 00:04:54.647 "num_blocks": 16384, 00:04:54.647 "uuid": "2dbbf839-859d-4bd1-81bc-f8439c542f6c", 00:04:54.647 "assigned_rate_limits": { 00:04:54.647 "rw_ios_per_sec": 0, 00:04:54.647 "rw_mbytes_per_sec": 0, 00:04:54.647 "r_mbytes_per_sec": 0, 00:04:54.647 "w_mbytes_per_sec": 0 00:04:54.647 }, 00:04:54.647 "claimed": false, 00:04:54.647 "zoned": false, 00:04:54.647 "supported_io_types": { 00:04:54.647 "read": true, 00:04:54.647 "write": true, 00:04:54.647 "unmap": true, 00:04:54.647 "write_zeroes": true, 00:04:54.647 "flush": true, 00:04:54.647 "reset": true, 00:04:54.647 "compare": false, 00:04:54.647 "compare_and_write": false, 00:04:54.647 "abort": true, 00:04:54.647 "nvme_admin": false, 00:04:54.647 "nvme_io": false 00:04:54.647 }, 00:04:54.647 "memory_domains": [ 00:04:54.647 { 00:04:54.647 "dma_device_id": "system", 00:04:54.647 "dma_device_type": 1 00:04:54.647 }, 00:04:54.647 { 00:04:54.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.647 "dma_device_type": 2 00:04:54.647 } 00:04:54.647 ], 00:04:54.647 "driver_specific": {} 00:04:54.647 } 00:04:54.647 ]' 00:04:54.647 11:40:45 -- rpc/rpc.sh@17 -- # jq length 00:04:54.647 11:40:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.647 11:40:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:54.647 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.937 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.937 [2024-04-18 11:40:45.199539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:54.937 [2024-04-18 11:40:45.199588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.937 [2024-04-18 11:40:45.199617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680 00:04:54.937 [2024-04-18 11:40:45.199629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.937 [2024-04-18 11:40:45.201750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.937 [2024-04-18 11:40:45.201780] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.937 Passthru0 00:04:54.937 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.937 11:40:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.937 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.937 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.937 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.937 11:40:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.937 { 00:04:54.937 "name": "Malloc0", 00:04:54.937 "aliases": [ 00:04:54.937 "2dbbf839-859d-4bd1-81bc-f8439c542f6c" 00:04:54.937 ], 00:04:54.937 "product_name": "Malloc disk", 00:04:54.937 "block_size": 
512, 00:04:54.937 "num_blocks": 16384, 00:04:54.937 "uuid": "2dbbf839-859d-4bd1-81bc-f8439c542f6c", 00:04:54.937 "assigned_rate_limits": { 00:04:54.937 "rw_ios_per_sec": 0, 00:04:54.937 "rw_mbytes_per_sec": 0, 00:04:54.937 "r_mbytes_per_sec": 0, 00:04:54.937 "w_mbytes_per_sec": 0 00:04:54.937 }, 00:04:54.937 "claimed": true, 00:04:54.937 "claim_type": "exclusive_write", 00:04:54.937 "zoned": false, 00:04:54.937 "supported_io_types": { 00:04:54.937 "read": true, 00:04:54.937 "write": true, 00:04:54.937 "unmap": true, 00:04:54.937 "write_zeroes": true, 00:04:54.937 "flush": true, 00:04:54.937 "reset": true, 00:04:54.937 "compare": false, 00:04:54.937 "compare_and_write": false, 00:04:54.937 "abort": true, 00:04:54.937 "nvme_admin": false, 00:04:54.937 "nvme_io": false 00:04:54.937 }, 00:04:54.937 "memory_domains": [ 00:04:54.937 { 00:04:54.937 "dma_device_id": "system", 00:04:54.937 "dma_device_type": 1 00:04:54.937 }, 00:04:54.937 { 00:04:54.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.937 "dma_device_type": 2 00:04:54.937 } 00:04:54.937 ], 00:04:54.937 "driver_specific": {} 00:04:54.937 }, 00:04:54.937 { 00:04:54.937 "name": "Passthru0", 00:04:54.937 "aliases": [ 00:04:54.937 "9342be8c-18f4-5192-b400-e0766d9487f9" 00:04:54.937 ], 00:04:54.937 "product_name": "passthru", 00:04:54.937 "block_size": 512, 00:04:54.937 "num_blocks": 16384, 00:04:54.937 "uuid": "9342be8c-18f4-5192-b400-e0766d9487f9", 00:04:54.937 "assigned_rate_limits": { 00:04:54.937 "rw_ios_per_sec": 0, 00:04:54.937 "rw_mbytes_per_sec": 0, 00:04:54.937 "r_mbytes_per_sec": 0, 00:04:54.937 "w_mbytes_per_sec": 0 00:04:54.937 }, 00:04:54.937 "claimed": false, 00:04:54.937 "zoned": false, 00:04:54.937 "supported_io_types": { 00:04:54.937 "read": true, 00:04:54.937 "write": true, 00:04:54.937 "unmap": true, 00:04:54.937 "write_zeroes": true, 00:04:54.937 "flush": true, 00:04:54.937 "reset": true, 00:04:54.937 "compare": false, 00:04:54.937 "compare_and_write": false, 00:04:54.937 "abort": true, 00:04:54.937 "nvme_admin": false, 00:04:54.937 "nvme_io": false 00:04:54.937 }, 00:04:54.937 "memory_domains": [ 00:04:54.937 { 00:04:54.937 "dma_device_id": "system", 00:04:54.937 "dma_device_type": 1 00:04:54.937 }, 00:04:54.937 { 00:04:54.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.938 "dma_device_type": 2 00:04:54.938 } 00:04:54.938 ], 00:04:54.938 "driver_specific": { 00:04:54.938 "passthru": { 00:04:54.938 "name": "Passthru0", 00:04:54.938 "base_bdev_name": "Malloc0" 00:04:54.938 } 00:04:54.938 } 00:04:54.938 } 00:04:54.938 ]' 00:04:54.938 11:40:45 -- rpc/rpc.sh@21 -- # jq length 00:04:54.938 11:40:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.938 11:40:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.938 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.938 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.938 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.938 11:40:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:54.938 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.938 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.938 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.938 11:40:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.938 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.938 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.938 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.938 11:40:45 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.938 11:40:45 -- rpc/rpc.sh@26 -- # jq length 00:04:54.938 11:40:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.938 00:04:54.938 real 0m0.316s 00:04:54.938 user 0m0.181s 00:04:54.938 sys 0m0.036s 00:04:54.938 11:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.938 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.938 ************************************ 00:04:54.938 END TEST rpc_integrity 00:04:54.938 ************************************ 00:04:54.938 11:40:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:54.938 11:40:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.938 11:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.938 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.196 ************************************ 00:04:55.196 START TEST rpc_plugins 00:04:55.196 ************************************ 00:04:55.196 11:40:45 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:55.196 11:40:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:55.196 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.196 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.196 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.196 11:40:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:55.196 11:40:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:55.196 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.196 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.196 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.196 11:40:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:55.196 { 00:04:55.196 "name": "Malloc1", 00:04:55.196 "aliases": [ 00:04:55.196 "d79e980a-dbaa-418b-a2da-d6e256f57177" 00:04:55.196 ], 00:04:55.196 "product_name": "Malloc disk", 00:04:55.196 "block_size": 4096, 00:04:55.196 "num_blocks": 256, 00:04:55.196 "uuid": "d79e980a-dbaa-418b-a2da-d6e256f57177", 00:04:55.196 "assigned_rate_limits": { 00:04:55.196 "rw_ios_per_sec": 0, 00:04:55.196 "rw_mbytes_per_sec": 0, 00:04:55.196 "r_mbytes_per_sec": 0, 00:04:55.196 "w_mbytes_per_sec": 0 00:04:55.196 }, 00:04:55.196 "claimed": false, 00:04:55.196 "zoned": false, 00:04:55.196 "supported_io_types": { 00:04:55.196 "read": true, 00:04:55.196 "write": true, 00:04:55.196 "unmap": true, 00:04:55.196 "write_zeroes": true, 00:04:55.196 "flush": true, 00:04:55.196 "reset": true, 00:04:55.196 "compare": false, 00:04:55.196 "compare_and_write": false, 00:04:55.196 "abort": true, 00:04:55.196 "nvme_admin": false, 00:04:55.196 "nvme_io": false 00:04:55.196 }, 00:04:55.196 "memory_domains": [ 00:04:55.196 { 00:04:55.196 "dma_device_id": "system", 00:04:55.196 "dma_device_type": 1 00:04:55.196 }, 00:04:55.196 { 00:04:55.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.196 "dma_device_type": 2 00:04:55.196 } 00:04:55.196 ], 00:04:55.196 "driver_specific": {} 00:04:55.196 } 00:04:55.196 ]' 00:04:55.196 11:40:45 -- rpc/rpc.sh@32 -- # jq length 00:04:55.196 11:40:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:55.196 11:40:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:55.196 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.197 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.197 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.197 11:40:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:55.197 11:40:45 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:55.197 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.197 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.197 11:40:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:55.197 11:40:45 -- rpc/rpc.sh@36 -- # jq length 00:04:55.197 11:40:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:55.197 00:04:55.197 real 0m0.144s 00:04:55.197 user 0m0.089s 00:04:55.197 sys 0m0.025s 00:04:55.197 11:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.197 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.197 ************************************ 00:04:55.197 END TEST rpc_plugins 00:04:55.197 ************************************ 00:04:55.455 11:40:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:55.455 11:40:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.455 11:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.455 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.455 ************************************ 00:04:55.455 START TEST rpc_trace_cmd_test 00:04:55.455 ************************************ 00:04:55.455 11:40:45 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:55.455 11:40:45 -- rpc/rpc.sh@40 -- # local info 00:04:55.456 11:40:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:55.456 11:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.456 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.456 11:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.456 11:40:45 -- rpc/rpc.sh@42 -- # info='{ 00:04:55.456 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2288658", 00:04:55.456 "tpoint_group_mask": "0x8", 00:04:55.456 "iscsi_conn": { 00:04:55.456 "mask": "0x2", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "scsi": { 00:04:55.456 "mask": "0x4", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "bdev": { 00:04:55.456 "mask": "0x8", 00:04:55.456 "tpoint_mask": "0xffffffffffffffff" 00:04:55.456 }, 00:04:55.456 "nvmf_rdma": { 00:04:55.456 "mask": "0x10", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "nvmf_tcp": { 00:04:55.456 "mask": "0x20", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "ftl": { 00:04:55.456 "mask": "0x40", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "blobfs": { 00:04:55.456 "mask": "0x80", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "dsa": { 00:04:55.456 "mask": "0x200", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "thread": { 00:04:55.456 "mask": "0x400", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "nvme_pcie": { 00:04:55.456 "mask": "0x800", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "iaa": { 00:04:55.456 "mask": "0x1000", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "nvme_tcp": { 00:04:55.456 "mask": "0x2000", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "bdev_nvme": { 00:04:55.456 "mask": "0x4000", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 }, 00:04:55.456 "sock": { 00:04:55.456 "mask": "0x8000", 00:04:55.456 "tpoint_mask": "0x0" 00:04:55.456 } 00:04:55.456 }' 00:04:55.456 11:40:45 -- rpc/rpc.sh@43 -- # jq length 00:04:55.456 11:40:45 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:55.456 11:40:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:55.714 11:40:46 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:55.714 11:40:46 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:04:55.714 11:40:46 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:55.714 11:40:46 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:55.714 11:40:46 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:55.714 11:40:46 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:55.714 11:40:46 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:55.714 00:04:55.714 real 0m0.218s 00:04:55.714 user 0m0.174s 00:04:55.714 sys 0m0.038s 00:04:55.714 11:40:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.714 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.714 ************************************ 00:04:55.714 END TEST rpc_trace_cmd_test 00:04:55.714 ************************************ 00:04:55.714 11:40:46 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:55.714 11:40:46 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:55.714 11:40:46 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:55.714 11:40:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.714 11:40:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.714 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.972 ************************************ 00:04:55.972 START TEST rpc_daemon_integrity 00:04:55.972 ************************************ 00:04:55.972 11:40:46 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:55.972 11:40:46 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:55.972 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.972 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.972 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.972 11:40:46 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:55.972 11:40:46 -- rpc/rpc.sh@13 -- # jq length 00:04:55.972 11:40:46 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:55.972 11:40:46 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:55.972 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.972 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.972 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.972 11:40:46 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:55.972 11:40:46 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:55.972 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.972 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.972 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.972 11:40:46 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:55.972 { 00:04:55.972 "name": "Malloc2", 00:04:55.972 "aliases": [ 00:04:55.972 "dd16368c-ebaa-43a7-b083-e6cd8e861a2e" 00:04:55.972 ], 00:04:55.972 "product_name": "Malloc disk", 00:04:55.972 "block_size": 512, 00:04:55.972 "num_blocks": 16384, 00:04:55.972 "uuid": "dd16368c-ebaa-43a7-b083-e6cd8e861a2e", 00:04:55.972 "assigned_rate_limits": { 00:04:55.972 "rw_ios_per_sec": 0, 00:04:55.972 "rw_mbytes_per_sec": 0, 00:04:55.972 "r_mbytes_per_sec": 0, 00:04:55.972 "w_mbytes_per_sec": 0 00:04:55.972 }, 00:04:55.972 "claimed": false, 00:04:55.972 "zoned": false, 00:04:55.972 "supported_io_types": { 00:04:55.972 "read": true, 00:04:55.972 "write": true, 00:04:55.972 "unmap": true, 00:04:55.972 "write_zeroes": true, 00:04:55.972 "flush": true, 00:04:55.972 "reset": true, 00:04:55.972 "compare": false, 00:04:55.972 "compare_and_write": false, 00:04:55.972 "abort": true, 00:04:55.972 "nvme_admin": false, 00:04:55.972 "nvme_io": false 00:04:55.972 }, 00:04:55.972 "memory_domains": [ 00:04:55.972 { 00:04:55.972 "dma_device_id": "system", 00:04:55.972 
"dma_device_type": 1 00:04:55.972 }, 00:04:55.972 { 00:04:55.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.972 "dma_device_type": 2 00:04:55.972 } 00:04:55.972 ], 00:04:55.972 "driver_specific": {} 00:04:55.972 } 00:04:55.972 ]' 00:04:55.972 11:40:46 -- rpc/rpc.sh@17 -- # jq length 00:04:55.972 11:40:46 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:55.972 11:40:46 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:55.972 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.972 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.972 [2024-04-18 11:40:46.476637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:55.972 [2024-04-18 11:40:46.476682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:55.972 [2024-04-18 11:40:46.476708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:55.972 [2024-04-18 11:40:46.476719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:55.972 [2024-04-18 11:40:46.478798] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:55.972 [2024-04-18 11:40:46.478826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:55.972 Passthru0 00:04:55.972 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.972 11:40:46 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:55.972 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.972 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.972 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.972 11:40:46 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:55.972 { 00:04:55.972 "name": "Malloc2", 00:04:55.972 "aliases": [ 00:04:55.972 "dd16368c-ebaa-43a7-b083-e6cd8e861a2e" 00:04:55.972 ], 00:04:55.972 "product_name": "Malloc disk", 00:04:55.972 "block_size": 512, 00:04:55.972 "num_blocks": 16384, 00:04:55.972 "uuid": "dd16368c-ebaa-43a7-b083-e6cd8e861a2e", 00:04:55.972 "assigned_rate_limits": { 00:04:55.972 "rw_ios_per_sec": 0, 00:04:55.972 "rw_mbytes_per_sec": 0, 00:04:55.972 "r_mbytes_per_sec": 0, 00:04:55.972 "w_mbytes_per_sec": 0 00:04:55.972 }, 00:04:55.972 "claimed": true, 00:04:55.973 "claim_type": "exclusive_write", 00:04:55.973 "zoned": false, 00:04:55.973 "supported_io_types": { 00:04:55.973 "read": true, 00:04:55.973 "write": true, 00:04:55.973 "unmap": true, 00:04:55.973 "write_zeroes": true, 00:04:55.973 "flush": true, 00:04:55.973 "reset": true, 00:04:55.973 "compare": false, 00:04:55.973 "compare_and_write": false, 00:04:55.973 "abort": true, 00:04:55.973 "nvme_admin": false, 00:04:55.973 "nvme_io": false 00:04:55.973 }, 00:04:55.973 "memory_domains": [ 00:04:55.973 { 00:04:55.973 "dma_device_id": "system", 00:04:55.973 "dma_device_type": 1 00:04:55.973 }, 00:04:55.973 { 00:04:55.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.973 "dma_device_type": 2 00:04:55.973 } 00:04:55.973 ], 00:04:55.973 "driver_specific": {} 00:04:55.973 }, 00:04:55.973 { 00:04:55.973 "name": "Passthru0", 00:04:55.973 "aliases": [ 00:04:55.973 "adfd448e-c053-53de-8fda-86c00691046c" 00:04:55.973 ], 00:04:55.973 "product_name": "passthru", 00:04:55.973 "block_size": 512, 00:04:55.973 "num_blocks": 16384, 00:04:55.973 "uuid": "adfd448e-c053-53de-8fda-86c00691046c", 00:04:55.973 "assigned_rate_limits": { 00:04:55.973 "rw_ios_per_sec": 0, 00:04:55.973 "rw_mbytes_per_sec": 0, 00:04:55.973 "r_mbytes_per_sec": 0, 00:04:55.973 
"w_mbytes_per_sec": 0 00:04:55.973 }, 00:04:55.973 "claimed": false, 00:04:55.973 "zoned": false, 00:04:55.973 "supported_io_types": { 00:04:55.973 "read": true, 00:04:55.973 "write": true, 00:04:55.973 "unmap": true, 00:04:55.973 "write_zeroes": true, 00:04:55.973 "flush": true, 00:04:55.973 "reset": true, 00:04:55.973 "compare": false, 00:04:55.973 "compare_and_write": false, 00:04:55.973 "abort": true, 00:04:55.973 "nvme_admin": false, 00:04:55.973 "nvme_io": false 00:04:55.973 }, 00:04:55.973 "memory_domains": [ 00:04:55.973 { 00:04:55.973 "dma_device_id": "system", 00:04:55.973 "dma_device_type": 1 00:04:55.973 }, 00:04:55.973 { 00:04:55.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.973 "dma_device_type": 2 00:04:55.973 } 00:04:55.973 ], 00:04:55.973 "driver_specific": { 00:04:55.973 "passthru": { 00:04:55.973 "name": "Passthru0", 00:04:55.973 "base_bdev_name": "Malloc2" 00:04:55.973 } 00:04:55.973 } 00:04:55.973 } 00:04:55.973 ]' 00:04:55.973 11:40:46 -- rpc/rpc.sh@21 -- # jq length 00:04:56.231 11:40:46 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.231 11:40:46 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.231 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.231 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.231 11:40:46 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:56.231 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.231 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.231 11:40:46 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.231 11:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.231 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 11:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.231 11:40:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.231 11:40:46 -- rpc/rpc.sh@26 -- # jq length 00:04:56.231 11:40:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.231 00:04:56.231 real 0m0.305s 00:04:56.231 user 0m0.172s 00:04:56.231 sys 0m0.042s 00:04:56.231 11:40:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.231 11:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 ************************************ 00:04:56.231 END TEST rpc_daemon_integrity 00:04:56.231 ************************************ 00:04:56.231 11:40:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:56.231 11:40:46 -- rpc/rpc.sh@84 -- # killprocess 2288658 00:04:56.231 11:40:46 -- common/autotest_common.sh@936 -- # '[' -z 2288658 ']' 00:04:56.231 11:40:46 -- common/autotest_common.sh@940 -- # kill -0 2288658 00:04:56.231 11:40:46 -- common/autotest_common.sh@941 -- # uname 00:04:56.231 11:40:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.231 11:40:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2288658 00:04:56.231 11:40:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.231 11:40:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.231 11:40:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2288658' 00:04:56.231 killing process with pid 2288658 00:04:56.231 11:40:46 -- common/autotest_common.sh@955 -- # kill 2288658 00:04:56.231 11:40:46 -- common/autotest_common.sh@960 -- # wait 2288658 00:04:58.768 00:04:58.768 real 0m5.600s 00:04:58.768 user 0m6.278s 
00:04:58.768 sys 0m1.178s 00:04:58.768 11:40:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.768 11:40:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.768 ************************************ 00:04:58.768 END TEST rpc 00:04:58.768 ************************************ 00:04:58.768 11:40:49 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.768 11:40:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.768 11:40:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.768 11:40:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.768 ************************************ 00:04:58.768 START TEST skip_rpc 00:04:58.768 ************************************ 00:04:58.768 11:40:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.027 * Looking for test storage... 00:04:59.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.027 11:40:49 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.027 11:40:49 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:59.027 11:40:49 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:59.027 11:40:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.027 11:40:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.027 11:40:49 -- common/autotest_common.sh@10 -- # set +x 00:04:59.027 ************************************ 00:04:59.027 START TEST skip_rpc 00:04:59.027 ************************************ 00:04:59.028 11:40:49 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:59.028 11:40:49 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2289943 00:04:59.028 11:40:49 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.028 11:40:49 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:59.028 11:40:49 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:59.286 [2024-04-18 11:40:49.657168] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:04:59.286 [2024-04-18 11:40:49.657243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289943 ] 00:04:59.286 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.286 [2024-04-18 11:40:49.777504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.545 [2024-04-18 11:40:49.982029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.810 11:40:54 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:04.810 11:40:54 -- common/autotest_common.sh@638 -- # local es=0 00:05:04.810 11:40:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:04.810 11:40:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:04.810 11:40:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.810 11:40:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:04.810 11:40:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.810 11:40:54 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:04.810 11:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.810 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:05:04.810 11:40:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:04.810 11:40:54 -- common/autotest_common.sh@641 -- # es=1 00:05:04.810 11:40:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:04.810 11:40:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:04.810 11:40:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:04.810 11:40:54 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:04.810 11:40:54 -- rpc/skip_rpc.sh@23 -- # killprocess 2289943 00:05:04.810 11:40:54 -- common/autotest_common.sh@936 -- # '[' -z 2289943 ']' 00:05:04.810 11:40:54 -- common/autotest_common.sh@940 -- # kill -0 2289943 00:05:04.810 11:40:54 -- common/autotest_common.sh@941 -- # uname 00:05:04.810 11:40:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.810 11:40:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2289943 00:05:04.810 11:40:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.810 11:40:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.810 11:40:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2289943' 00:05:04.810 killing process with pid 2289943 00:05:04.810 11:40:54 -- common/autotest_common.sh@955 -- # kill 2289943 00:05:04.810 11:40:54 -- common/autotest_common.sh@960 -- # wait 2289943 00:05:06.715 00:05:06.715 real 0m7.380s 00:05:06.715 user 0m6.977s 00:05:06.715 sys 0m0.432s 00:05:06.715 11:40:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.715 11:40:56 -- common/autotest_common.sh@10 -- # set +x 00:05:06.715 ************************************ 00:05:06.715 END TEST skip_rpc 00:05:06.715 ************************************ 00:05:06.715 11:40:56 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:06.715 11:40:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.715 11:40:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.715 11:40:56 -- common/autotest_common.sh@10 -- # set +x 00:05:06.715 ************************************ 00:05:06.715 START TEST skip_rpc_with_json 00:05:06.715 ************************************ 
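The skip_rpc pass that finished above starts the target with --no-rpc-server and asserts that a client call then fails, while the skip_rpc_with_json output below exercises the opposite direction with a full save/restore cycle. The negative check amounts to the following sketch; spdk_get_version is the RPC the test attempts per the log, and the expected-failure handling here is illustrative.

```bash
# Sketch: with --no-rpc-server the target never opens /var/tmp/spdk.sock,
# so any RPC is expected to fail -- this is what test_skip_rpc asserts.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5                                        # the test sleeps rather than waiting for the socket
./scripts/rpc.py spdk_get_version && echo "unexpected success" || echo "RPC unavailable, as expected"
```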
00:05:06.715 11:40:57 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:06.715 11:40:57 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:06.715 11:40:57 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2291301 00:05:06.715 11:40:57 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.715 11:40:57 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.715 11:40:57 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2291301 00:05:06.715 11:40:57 -- common/autotest_common.sh@817 -- # '[' -z 2291301 ']' 00:05:06.715 11:40:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.715 11:40:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:06.715 11:40:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.715 11:40:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:06.715 11:40:57 -- common/autotest_common.sh@10 -- # set +x 00:05:06.715 [2024-04-18 11:40:57.263341] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:05:06.715 [2024-04-18 11:40:57.263433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291301 ] 00:05:06.974 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.974 [2024-04-18 11:40:57.389106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.232 [2024-04-18 11:40:57.592535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.169 11:40:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.169 11:40:58 -- common/autotest_common.sh@850 -- # return 0 00:05:08.169 11:40:58 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.169 11:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:08.169 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:05:08.169 [2024-04-18 11:40:58.448302] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.169 request: 00:05:08.169 { 00:05:08.169 "trtype": "tcp", 00:05:08.169 "method": "nvmf_get_transports", 00:05:08.169 "req_id": 1 00:05:08.169 } 00:05:08.169 Got JSON-RPC error response 00:05:08.169 response: 00:05:08.169 { 00:05:08.169 "code": -19, 00:05:08.169 "message": "No such device" 00:05:08.169 } 00:05:08.169 11:40:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:08.169 11:40:58 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.169 11:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:08.169 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:05:08.169 [2024-04-18 11:40:58.460415] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.169 11:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:08.169 11:40:58 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.169 11:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:08.169 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:05:08.169 11:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:08.169 11:40:58 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.169 { 
00:05:08.169 "subsystems": [ 00:05:08.169 { 00:05:08.169 "subsystem": "vfio_user_target", 00:05:08.169 "config": null 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "subsystem": "keyring", 00:05:08.169 "config": [] 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "subsystem": "iobuf", 00:05:08.169 "config": [ 00:05:08.169 { 00:05:08.169 "method": "iobuf_set_options", 00:05:08.169 "params": { 00:05:08.169 "small_pool_count": 8192, 00:05:08.169 "large_pool_count": 1024, 00:05:08.169 "small_bufsize": 8192, 00:05:08.169 "large_bufsize": 135168 00:05:08.169 } 00:05:08.169 } 00:05:08.169 ] 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "subsystem": "sock", 00:05:08.169 "config": [ 00:05:08.169 { 00:05:08.169 "method": "sock_impl_set_options", 00:05:08.169 "params": { 00:05:08.169 "impl_name": "posix", 00:05:08.169 "recv_buf_size": 2097152, 00:05:08.169 "send_buf_size": 2097152, 00:05:08.169 "enable_recv_pipe": true, 00:05:08.169 "enable_quickack": false, 00:05:08.169 "enable_placement_id": 0, 00:05:08.169 "enable_zerocopy_send_server": true, 00:05:08.169 "enable_zerocopy_send_client": false, 00:05:08.169 "zerocopy_threshold": 0, 00:05:08.169 "tls_version": 0, 00:05:08.169 "enable_ktls": false 00:05:08.169 } 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "method": "sock_impl_set_options", 00:05:08.169 "params": { 00:05:08.169 "impl_name": "ssl", 00:05:08.169 "recv_buf_size": 4096, 00:05:08.169 "send_buf_size": 4096, 00:05:08.169 "enable_recv_pipe": true, 00:05:08.169 "enable_quickack": false, 00:05:08.169 "enable_placement_id": 0, 00:05:08.169 "enable_zerocopy_send_server": true, 00:05:08.169 "enable_zerocopy_send_client": false, 00:05:08.169 "zerocopy_threshold": 0, 00:05:08.169 "tls_version": 0, 00:05:08.169 "enable_ktls": false 00:05:08.169 } 00:05:08.169 } 00:05:08.169 ] 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "subsystem": "vmd", 00:05:08.169 "config": [] 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "subsystem": "accel", 00:05:08.169 "config": [ 00:05:08.169 { 00:05:08.169 "method": "accel_set_options", 00:05:08.169 "params": { 00:05:08.169 "small_cache_size": 128, 00:05:08.169 "large_cache_size": 16, 00:05:08.169 "task_count": 2048, 00:05:08.169 "sequence_count": 2048, 00:05:08.169 "buf_count": 2048 00:05:08.169 } 00:05:08.169 } 00:05:08.169 ] 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "subsystem": "bdev", 00:05:08.169 "config": [ 00:05:08.169 { 00:05:08.169 "method": "bdev_set_options", 00:05:08.169 "params": { 00:05:08.169 "bdev_io_pool_size": 65535, 00:05:08.169 "bdev_io_cache_size": 256, 00:05:08.169 "bdev_auto_examine": true, 00:05:08.169 "iobuf_small_cache_size": 128, 00:05:08.169 "iobuf_large_cache_size": 16 00:05:08.169 } 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "method": "bdev_raid_set_options", 00:05:08.169 "params": { 00:05:08.169 "process_window_size_kb": 1024 00:05:08.169 } 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "method": "bdev_iscsi_set_options", 00:05:08.169 "params": { 00:05:08.169 "timeout_sec": 30 00:05:08.169 } 00:05:08.169 }, 00:05:08.169 { 00:05:08.169 "method": "bdev_nvme_set_options", 00:05:08.169 "params": { 00:05:08.169 "action_on_timeout": "none", 00:05:08.170 "timeout_us": 0, 00:05:08.170 "timeout_admin_us": 0, 00:05:08.170 "keep_alive_timeout_ms": 10000, 00:05:08.170 "arbitration_burst": 0, 00:05:08.170 "low_priority_weight": 0, 00:05:08.170 "medium_priority_weight": 0, 00:05:08.170 "high_priority_weight": 0, 00:05:08.170 "nvme_adminq_poll_period_us": 10000, 00:05:08.170 "nvme_ioq_poll_period_us": 0, 00:05:08.170 "io_queue_requests": 0, 00:05:08.170 
"delay_cmd_submit": true, 00:05:08.170 "transport_retry_count": 4, 00:05:08.170 "bdev_retry_count": 3, 00:05:08.170 "transport_ack_timeout": 0, 00:05:08.170 "ctrlr_loss_timeout_sec": 0, 00:05:08.170 "reconnect_delay_sec": 0, 00:05:08.170 "fast_io_fail_timeout_sec": 0, 00:05:08.170 "disable_auto_failback": false, 00:05:08.170 "generate_uuids": false, 00:05:08.170 "transport_tos": 0, 00:05:08.170 "nvme_error_stat": false, 00:05:08.170 "rdma_srq_size": 0, 00:05:08.170 "io_path_stat": false, 00:05:08.170 "allow_accel_sequence": false, 00:05:08.170 "rdma_max_cq_size": 0, 00:05:08.170 "rdma_cm_event_timeout_ms": 0, 00:05:08.170 "dhchap_digests": [ 00:05:08.170 "sha256", 00:05:08.170 "sha384", 00:05:08.170 "sha512" 00:05:08.170 ], 00:05:08.170 "dhchap_dhgroups": [ 00:05:08.170 "null", 00:05:08.170 "ffdhe2048", 00:05:08.170 "ffdhe3072", 00:05:08.170 "ffdhe4096", 00:05:08.170 "ffdhe6144", 00:05:08.170 "ffdhe8192" 00:05:08.170 ] 00:05:08.170 } 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "method": "bdev_nvme_set_hotplug", 00:05:08.170 "params": { 00:05:08.170 "period_us": 100000, 00:05:08.170 "enable": false 00:05:08.170 } 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "method": "bdev_wait_for_examine" 00:05:08.170 } 00:05:08.170 ] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "scsi", 00:05:08.170 "config": null 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "scheduler", 00:05:08.170 "config": [ 00:05:08.170 { 00:05:08.170 "method": "framework_set_scheduler", 00:05:08.170 "params": { 00:05:08.170 "name": "static" 00:05:08.170 } 00:05:08.170 } 00:05:08.170 ] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "vhost_scsi", 00:05:08.170 "config": [] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "vhost_blk", 00:05:08.170 "config": [] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "ublk", 00:05:08.170 "config": [] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "nbd", 00:05:08.170 "config": [] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "nvmf", 00:05:08.170 "config": [ 00:05:08.170 { 00:05:08.170 "method": "nvmf_set_config", 00:05:08.170 "params": { 00:05:08.170 "discovery_filter": "match_any", 00:05:08.170 "admin_cmd_passthru": { 00:05:08.170 "identify_ctrlr": false 00:05:08.170 } 00:05:08.170 } 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "method": "nvmf_set_max_subsystems", 00:05:08.170 "params": { 00:05:08.170 "max_subsystems": 1024 00:05:08.170 } 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "method": "nvmf_set_crdt", 00:05:08.170 "params": { 00:05:08.170 "crdt1": 0, 00:05:08.170 "crdt2": 0, 00:05:08.170 "crdt3": 0 00:05:08.170 } 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "method": "nvmf_create_transport", 00:05:08.170 "params": { 00:05:08.170 "trtype": "TCP", 00:05:08.170 "max_queue_depth": 128, 00:05:08.170 "max_io_qpairs_per_ctrlr": 127, 00:05:08.170 "in_capsule_data_size": 4096, 00:05:08.170 "max_io_size": 131072, 00:05:08.170 "io_unit_size": 131072, 00:05:08.170 "max_aq_depth": 128, 00:05:08.170 "num_shared_buffers": 511, 00:05:08.170 "buf_cache_size": 4294967295, 00:05:08.170 "dif_insert_or_strip": false, 00:05:08.170 "zcopy": false, 00:05:08.170 "c2h_success": true, 00:05:08.170 "sock_priority": 0, 00:05:08.170 "abort_timeout_sec": 1, 00:05:08.170 "ack_timeout": 0 00:05:08.170 } 00:05:08.170 } 00:05:08.170 ] 00:05:08.170 }, 00:05:08.170 { 00:05:08.170 "subsystem": "iscsi", 00:05:08.170 "config": [ 00:05:08.170 { 00:05:08.170 "method": "iscsi_set_options", 00:05:08.170 "params": { 00:05:08.170 "node_base": "iqn.2016-06.io.spdk", 
00:05:08.170 "max_sessions": 128, 00:05:08.170 "max_connections_per_session": 2, 00:05:08.170 "max_queue_depth": 64, 00:05:08.170 "default_time2wait": 2, 00:05:08.170 "default_time2retain": 20, 00:05:08.170 "first_burst_length": 8192, 00:05:08.170 "immediate_data": true, 00:05:08.170 "allow_duplicated_isid": false, 00:05:08.170 "error_recovery_level": 0, 00:05:08.170 "nop_timeout": 60, 00:05:08.170 "nop_in_interval": 30, 00:05:08.170 "disable_chap": false, 00:05:08.170 "require_chap": false, 00:05:08.170 "mutual_chap": false, 00:05:08.170 "chap_group": 0, 00:05:08.170 "max_large_datain_per_connection": 64, 00:05:08.170 "max_r2t_per_connection": 4, 00:05:08.170 "pdu_pool_size": 36864, 00:05:08.170 "immediate_data_pool_size": 16384, 00:05:08.170 "data_out_pool_size": 2048 00:05:08.170 } 00:05:08.170 } 00:05:08.170 ] 00:05:08.170 } 00:05:08.170 ] 00:05:08.170 } 00:05:08.170 11:40:58 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.170 11:40:58 -- rpc/skip_rpc.sh@40 -- # killprocess 2291301 00:05:08.170 11:40:58 -- common/autotest_common.sh@936 -- # '[' -z 2291301 ']' 00:05:08.170 11:40:58 -- common/autotest_common.sh@940 -- # kill -0 2291301 00:05:08.170 11:40:58 -- common/autotest_common.sh@941 -- # uname 00:05:08.170 11:40:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.170 11:40:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2291301 00:05:08.170 11:40:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.170 11:40:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.170 11:40:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2291301' 00:05:08.170 killing process with pid 2291301 00:05:08.170 11:40:58 -- common/autotest_common.sh@955 -- # kill 2291301 00:05:08.170 11:40:58 -- common/autotest_common.sh@960 -- # wait 2291301 00:05:10.703 11:41:01 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2292004 00:05:10.703 11:41:01 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:10.703 11:41:01 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.969 11:41:06 -- rpc/skip_rpc.sh@50 -- # killprocess 2292004 00:05:15.969 11:41:06 -- common/autotest_common.sh@936 -- # '[' -z 2292004 ']' 00:05:15.969 11:41:06 -- common/autotest_common.sh@940 -- # kill -0 2292004 00:05:15.969 11:41:06 -- common/autotest_common.sh@941 -- # uname 00:05:15.969 11:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.969 11:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2292004 00:05:15.969 11:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.969 11:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.969 11:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2292004' 00:05:15.969 killing process with pid 2292004 00:05:15.969 11:41:06 -- common/autotest_common.sh@955 -- # kill 2292004 00:05:15.969 11:41:06 -- common/autotest_common.sh@960 -- # wait 2292004 00:05:17.906 11:41:08 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.906 11:41:08 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.906 00:05:17.906 real 0m11.214s 00:05:17.906 user 0m10.706s 00:05:17.906 sys 0m0.954s 00:05:17.906 11:41:08 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.906 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:05:17.906 ************************************ 00:05:17.906 END TEST skip_rpc_with_json 00:05:17.906 ************************************ 00:05:17.906 11:41:08 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.906 11:41:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.906 11:41:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.906 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.165 ************************************ 00:05:18.165 START TEST skip_rpc_with_delay 00:05:18.165 ************************************ 00:05:18.165 11:41:08 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:18.165 11:41:08 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.165 11:41:08 -- common/autotest_common.sh@638 -- # local es=0 00:05:18.165 11:41:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.165 11:41:08 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.165 11:41:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.165 11:41:08 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.165 11:41:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.165 11:41:08 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.165 11:41:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.165 11:41:08 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.165 11:41:08 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:18.165 11:41:08 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.165 [2024-04-18 11:41:08.671513] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
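The error above is the expected outcome of the skip_rpc_with_delay case: spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, because there would be no RPC server to wait on. A minimal reproduction of that check, sketched with the binary path shortened (the run above uses the full workspace path), is:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: non-zero exit, preceded by
  #   spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.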
00:05:18.165 [2024-04-18 11:41:08.671604] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:18.425 11:41:08 -- common/autotest_common.sh@641 -- # es=1 00:05:18.425 11:41:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:18.425 11:41:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:18.425 11:41:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:18.425 00:05:18.425 real 0m0.152s 00:05:18.425 user 0m0.071s 00:05:18.425 sys 0m0.080s 00:05:18.425 11:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.425 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.425 ************************************ 00:05:18.425 END TEST skip_rpc_with_delay 00:05:18.425 ************************************ 00:05:18.425 11:41:08 -- rpc/skip_rpc.sh@77 -- # uname 00:05:18.425 11:41:08 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:18.425 11:41:08 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:18.425 11:41:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.425 11:41:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.425 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.425 ************************************ 00:05:18.425 START TEST exit_on_failed_rpc_init 00:05:18.425 ************************************ 00:05:18.425 11:41:08 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:18.425 11:41:08 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2293515 00:05:18.425 11:41:08 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2293515 00:05:18.425 11:41:08 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.425 11:41:08 -- common/autotest_common.sh@817 -- # '[' -z 2293515 ']' 00:05:18.425 11:41:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.425 11:41:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.425 11:41:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.425 11:41:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.425 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.685 [2024-04-18 11:41:09.045827] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:05:18.685 [2024-04-18 11:41:09.045917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293515 ] 00:05:18.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.685 [2024-04-18 11:41:09.169480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.943 [2024-04-18 11:41:09.381294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.882 11:41:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:19.882 11:41:10 -- common/autotest_common.sh@850 -- # return 0 00:05:19.882 11:41:10 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.882 11:41:10 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.882 11:41:10 -- common/autotest_common.sh@638 -- # local es=0 00:05:19.882 11:41:10 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.882 11:41:10 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.882 11:41:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.882 11:41:10 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.882 11:41:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.882 11:41:10 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.882 11:41:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.882 11:41:10 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.882 11:41:10 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:19.882 11:41:10 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.882 [2024-04-18 11:41:10.393165] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:05:19.882 [2024-04-18 11:41:10.393251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293727 ] 00:05:20.142 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.142 [2024-04-18 11:41:10.517693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.401 [2024-04-18 11:41:10.740179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.401 [2024-04-18 11:41:10.740275] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
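The "socket path in use" error above is exactly what exit_on_failed_rpc_init is probing for: a second spdk_tgt pointed at the default RPC socket must fail to initialize while the first one holds it. Stripped of the harness wrappers (binary path shortened, core masks as in the trace, the alternate socket path only an example), the scenario is roughly:

  ./build/bin/spdk_tgt -m 0x1 &        # first target claims /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2          # fails: RPC Unix domain socket path in use
  # a second concurrent instance needs its own socket, e.g.:
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock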
00:05:20.401 [2024-04-18 11:41:10.740296] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:20.401 [2024-04-18 11:41:10.740308] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.660 11:41:11 -- common/autotest_common.sh@641 -- # es=234 00:05:20.660 11:41:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:20.660 11:41:11 -- common/autotest_common.sh@650 -- # es=106 00:05:20.660 11:41:11 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:20.661 11:41:11 -- common/autotest_common.sh@658 -- # es=1 00:05:20.661 11:41:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:20.661 11:41:11 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:20.661 11:41:11 -- rpc/skip_rpc.sh@70 -- # killprocess 2293515 00:05:20.661 11:41:11 -- common/autotest_common.sh@936 -- # '[' -z 2293515 ']' 00:05:20.661 11:41:11 -- common/autotest_common.sh@940 -- # kill -0 2293515 00:05:20.661 11:41:11 -- common/autotest_common.sh@941 -- # uname 00:05:20.661 11:41:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.661 11:41:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2293515 00:05:20.919 11:41:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.919 11:41:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.919 11:41:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2293515' 00:05:20.919 killing process with pid 2293515 00:05:20.919 11:41:11 -- common/autotest_common.sh@955 -- # kill 2293515 00:05:20.919 11:41:11 -- common/autotest_common.sh@960 -- # wait 2293515 00:05:23.452 00:05:23.452 real 0m4.616s 00:05:23.452 user 0m5.146s 00:05:23.452 sys 0m0.701s 00:05:23.452 11:41:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.452 11:41:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 END TEST exit_on_failed_rpc_init 00:05:23.452 ************************************ 00:05:23.452 11:41:13 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.452 00:05:23.452 real 0m24.325s 00:05:23.452 user 0m23.219s 00:05:23.452 sys 0m2.747s 00:05:23.452 11:41:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.452 11:41:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 END TEST skip_rpc 00:05:23.452 ************************************ 00:05:23.452 11:41:13 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.452 11:41:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.452 11:41:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.452 11:41:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 START TEST rpc_client 00:05:23.452 ************************************ 00:05:23.452 11:41:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.452 * Looking for test storage... 
00:05:23.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:23.452 11:41:13 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:23.452 OK 00:05:23.452 11:41:13 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.452 00:05:23.452 real 0m0.177s 00:05:23.452 user 0m0.068s 00:05:23.452 sys 0m0.120s 00:05:23.452 11:41:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.452 11:41:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 END TEST rpc_client 00:05:23.452 ************************************ 00:05:23.711 11:41:14 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.711 11:41:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.711 11:41:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.711 11:41:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.711 ************************************ 00:05:23.711 START TEST json_config 00:05:23.711 ************************************ 00:05:23.711 11:41:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.971 11:41:14 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.971 11:41:14 -- nvmf/common.sh@7 -- # uname -s 00:05:23.971 11:41:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.971 11:41:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.971 11:41:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.971 11:41:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.971 11:41:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.971 11:41:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.971 11:41:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.971 11:41:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.971 11:41:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.971 11:41:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.971 11:41:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:23.971 11:41:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:23.971 11:41:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.971 11:41:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.971 11:41:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.971 11:41:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.971 11:41:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.971 11:41:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.971 11:41:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.971 11:41:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.971 11:41:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.971 11:41:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.971 11:41:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.971 11:41:14 -- paths/export.sh@5 -- # export PATH 00:05:23.971 11:41:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.971 11:41:14 -- nvmf/common.sh@47 -- # : 0 00:05:23.971 11:41:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.971 11:41:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.971 11:41:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.971 11:41:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.971 11:41:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.971 11:41:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.971 11:41:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.971 11:41:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.971 11:41:14 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.971 11:41:14 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.971 11:41:14 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.972 11:41:14 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.972 11:41:14 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.972 11:41:14 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:23.972 11:41:14 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:23.972 11:41:14 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:23.972 11:41:14 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:23.972 11:41:14 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:23.972 11:41:14 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:23.972 11:41:14 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:23.972 11:41:14 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:23.972 11:41:14 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:23.972 11:41:14 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.972 11:41:14 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:23.972 INFO: JSON configuration test init 00:05:23.972 11:41:14 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:23.972 11:41:14 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:23.972 11:41:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:23.972 11:41:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.972 11:41:14 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:23.972 11:41:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:23.972 11:41:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.972 11:41:14 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:23.972 11:41:14 -- json_config/common.sh@9 -- # local app=target 00:05:23.972 11:41:14 -- json_config/common.sh@10 -- # shift 00:05:23.972 11:41:14 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.972 11:41:14 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.972 11:41:14 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.972 11:41:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.972 11:41:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.972 11:41:14 -- json_config/common.sh@22 -- # app_pid["$app"]=2294455 00:05:23.972 11:41:14 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.972 Waiting for target to run... 00:05:23.972 11:41:14 -- json_config/common.sh@25 -- # waitforlisten 2294455 /var/tmp/spdk_tgt.sock 00:05:23.972 11:41:14 -- common/autotest_common.sh@817 -- # '[' -z 2294455 ']' 00:05:23.972 11:41:14 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:23.972 11:41:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.972 11:41:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.972 11:41:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.972 11:41:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.972 11:41:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.972 [2024-04-18 11:41:14.414344] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
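The json_config suite starts its target with --wait-for-rpc, so the application comes up with only the JSON-RPC server active on /var/tmp/spdk_tgt.sock and sits in startup state until configured. A bare-bones version of that handshake (the polling loop is a stand-in for the harness's waitforlisten helper; rpc_get_methods and framework_start_init are standard SPDK RPCs, the latter not shown explicitly in this trace):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # ... configuration RPCs go here ...
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init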
00:05:23.972 [2024-04-18 11:41:14.414434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294455 ] 00:05:23.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.230 [2024-04-18 11:41:14.764069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.488 [2024-04-18 11:41:14.966408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.746 11:41:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.746 11:41:15 -- common/autotest_common.sh@850 -- # return 0 00:05:24.746 11:41:15 -- json_config/common.sh@26 -- # echo '' 00:05:24.746 00:05:24.746 11:41:15 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:24.746 11:41:15 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:24.746 11:41:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:24.746 11:41:15 -- common/autotest_common.sh@10 -- # set +x 00:05:24.746 11:41:15 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:24.746 11:41:15 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:24.746 11:41:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:24.746 11:41:15 -- common/autotest_common.sh@10 -- # set +x 00:05:24.747 11:41:15 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:24.747 11:41:15 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:24.747 11:41:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.935 11:41:18 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:28.935 11:41:18 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.935 11:41:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:28.935 11:41:18 -- common/autotest_common.sh@10 -- # set +x 00:05:28.935 11:41:18 -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.935 11:41:18 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.935 11:41:18 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.935 11:41:18 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.935 11:41:18 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.935 11:41:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.935 11:41:19 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.935 11:41:19 -- json_config/json_config.sh@48 -- # local get_types 00:05:28.935 11:41:19 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:28.935 11:41:19 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:28.935 11:41:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:28.935 11:41:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.935 11:41:19 -- json_config/json_config.sh@55 -- # return 0 00:05:28.935 11:41:19 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:28.935 11:41:19 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.935 11:41:19 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.935 11:41:19 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:28.935 11:41:19 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:28.935 11:41:19 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:28.935 11:41:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:28.935 11:41:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.935 11:41:19 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.935 11:41:19 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:28.935 11:41:19 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:28.935 11:41:19 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.935 11:41:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.935 MallocForNvmf0 00:05:28.935 11:41:19 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.935 11:41:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.194 MallocForNvmf1 00:05:29.194 11:41:19 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.194 11:41:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.194 [2024-04-18 11:41:19.712362] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.194 11:41:19 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.194 11:41:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.453 11:41:19 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.453 11:41:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.712 11:41:20 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.712 11:41:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.712 11:41:20 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.712 11:41:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.971 [2024-04-18 11:41:20.346497] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.971 11:41:20 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:29.971 11:41:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:29.971 
11:41:20 -- common/autotest_common.sh@10 -- # set +x 00:05:29.971 11:41:20 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:29.971 11:41:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:29.971 11:41:20 -- common/autotest_common.sh@10 -- # set +x 00:05:29.971 11:41:20 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:29.971 11:41:20 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.971 11:41:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.230 MallocBdevForConfigChangeCheck 00:05:30.230 11:41:20 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:30.230 11:41:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:30.230 11:41:20 -- common/autotest_common.sh@10 -- # set +x 00:05:30.230 11:41:20 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:30.230 11:41:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.488 11:41:20 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:30.488 INFO: shutting down applications... 00:05:30.488 11:41:20 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:30.489 11:41:20 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:30.489 11:41:20 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:30.489 11:41:20 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.024 Calling clear_iscsi_subsystem 00:05:33.024 Calling clear_nvmf_subsystem 00:05:33.024 Calling clear_nbd_subsystem 00:05:33.024 Calling clear_ublk_subsystem 00:05:33.024 Calling clear_vhost_blk_subsystem 00:05:33.024 Calling clear_vhost_scsi_subsystem 00:05:33.024 Calling clear_bdev_subsystem 00:05:33.024 11:41:23 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:33.024 11:41:23 -- json_config/json_config.sh@343 -- # count=100 00:05:33.024 11:41:23 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:33.024 11:41:23 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.024 11:41:23 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.024 11:41:23 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.024 11:41:23 -- json_config/json_config.sh@345 -- # break 00:05:33.024 11:41:23 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:33.024 11:41:23 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:33.024 11:41:23 -- json_config/common.sh@31 -- # local app=target 00:05:33.024 11:41:23 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.024 11:41:23 -- json_config/common.sh@35 -- # [[ -n 2294455 ]] 00:05:33.024 11:41:23 -- json_config/common.sh@38 -- # kill -SIGINT 2294455 00:05:33.024 11:41:23 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.024 11:41:23 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.024 11:41:23 -- json_config/common.sh@41 -- # kill -0 2294455 00:05:33.024 11:41:23 -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.591 11:41:23 -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.591 11:41:23 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.591 11:41:23 -- json_config/common.sh@41 -- # kill -0 2294455 00:05:33.591 11:41:23 -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.195 11:41:24 -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.195 11:41:24 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.195 11:41:24 -- json_config/common.sh@41 -- # kill -0 2294455 00:05:34.195 11:41:24 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.195 11:41:24 -- json_config/common.sh@43 -- # break 00:05:34.195 11:41:24 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.195 11:41:24 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.195 SPDK target shutdown done 00:05:34.195 11:41:24 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:34.195 INFO: relaunching applications... 00:05:34.195 11:41:24 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.195 11:41:24 -- json_config/common.sh@9 -- # local app=target 00:05:34.195 11:41:24 -- json_config/common.sh@10 -- # shift 00:05:34.195 11:41:24 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.195 11:41:24 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.195 11:41:24 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.195 11:41:24 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.195 11:41:24 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.195 11:41:24 -- json_config/common.sh@22 -- # app_pid["$app"]=2296438 00:05:34.195 11:41:24 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.195 Waiting for target to run... 00:05:34.195 11:41:24 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.195 11:41:24 -- json_config/common.sh@25 -- # waitforlisten 2296438 /var/tmp/spdk_tgt.sock 00:05:34.195 11:41:24 -- common/autotest_common.sh@817 -- # '[' -z 2296438 ']' 00:05:34.195 11:41:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.195 11:41:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.195 11:41:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.195 11:41:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.195 11:41:24 -- common/autotest_common.sh@10 -- # set +x 00:05:34.195 [2024-04-18 11:41:24.574124] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
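The relaunch above replays, from spdk_tgt_config.json, the NVMe-oF configuration that was first built over the RPC socket: two malloc bdevs, a TCP transport, one subsystem carrying both namespaces, and a listener on 127.0.0.1:4420. Issued by hand, that sequence is approximately the following (all commands as traced earlier; the final redirection is how the saved config file is produced):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json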
00:05:34.195 [2024-04-18 11:41:24.574221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296438 ] 00:05:34.195 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.762 [2024-04-18 11:41:25.060228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.762 [2024-04-18 11:41:25.277630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.946 [2024-04-18 11:41:29.015337] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.946 [2024-04-18 11:41:29.047758] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.946 11:41:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.946 11:41:29 -- common/autotest_common.sh@850 -- # return 0 00:05:38.946 11:41:29 -- json_config/common.sh@26 -- # echo '' 00:05:38.946 00:05:38.946 11:41:29 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:38.946 11:41:29 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.946 INFO: Checking if target configuration is the same... 00:05:38.946 11:41:29 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:38.946 11:41:29 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.946 11:41:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.946 + '[' 2 -ne 2 ']' 00:05:38.946 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.946 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.946 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.946 +++ basename /dev/fd/62 00:05:38.946 ++ mktemp /tmp/62.XXX 00:05:38.946 + tmp_file_1=/tmp/62.bg2 00:05:38.946 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.946 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.946 + tmp_file_2=/tmp/spdk_tgt_config.json.198 00:05:38.946 + ret=0 00:05:38.946 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.210 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.210 + diff -u /tmp/62.bg2 /tmp/spdk_tgt_config.json.198 00:05:39.210 + echo 'INFO: JSON config files are the same' 00:05:39.210 INFO: JSON config files are the same 00:05:39.210 + rm /tmp/62.bg2 /tmp/spdk_tgt_config.json.198 00:05:39.210 + exit 0 00:05:39.210 11:41:29 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:39.210 11:41:29 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.210 INFO: changing configuration and checking if this can be detected... 
00:05:39.210 11:41:29 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.210 11:41:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.469 11:41:29 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.469 11:41:29 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:39.469 11:41:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.469 + '[' 2 -ne 2 ']' 00:05:39.469 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.469 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:39.469 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.469 +++ basename /dev/fd/62 00:05:39.469 ++ mktemp /tmp/62.XXX 00:05:39.469 + tmp_file_1=/tmp/62.XmP 00:05:39.469 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.469 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.469 + tmp_file_2=/tmp/spdk_tgt_config.json.C3T 00:05:39.469 + ret=0 00:05:39.469 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.727 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.727 + diff -u /tmp/62.XmP /tmp/spdk_tgt_config.json.C3T 00:05:39.727 + ret=1 00:05:39.727 + echo '=== Start of file: /tmp/62.XmP ===' 00:05:39.727 + cat /tmp/62.XmP 00:05:39.727 + echo '=== End of file: /tmp/62.XmP ===' 00:05:39.727 + echo '' 00:05:39.727 + echo '=== Start of file: /tmp/spdk_tgt_config.json.C3T ===' 00:05:39.727 + cat /tmp/spdk_tgt_config.json.C3T 00:05:39.727 + echo '=== End of file: /tmp/spdk_tgt_config.json.C3T ===' 00:05:39.727 + echo '' 00:05:39.727 + rm /tmp/62.XmP /tmp/spdk_tgt_config.json.C3T 00:05:39.727 + exit 1 00:05:39.727 11:41:30 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:39.727 INFO: configuration change detected. 
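The same-config check and the change-detection check above share one mechanism: dump the live configuration with save_config, canonicalize both sides with config_filter.py -method sort, and diff the results; deleting MallocBdevForConfigChangeCheck is what makes the second diff non-empty. Reduced to its essentials (the temporary file names are placeholders, not the mktemp names from this run):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'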
00:05:39.728 11:41:30 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:39.728 11:41:30 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:39.728 11:41:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:39.728 11:41:30 -- common/autotest_common.sh@10 -- # set +x 00:05:39.728 11:41:30 -- json_config/json_config.sh@307 -- # local ret=0 00:05:39.728 11:41:30 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:39.728 11:41:30 -- json_config/json_config.sh@317 -- # [[ -n 2296438 ]] 00:05:39.728 11:41:30 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:39.728 11:41:30 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.728 11:41:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:39.728 11:41:30 -- common/autotest_common.sh@10 -- # set +x 00:05:39.728 11:41:30 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:39.728 11:41:30 -- json_config/json_config.sh@193 -- # uname -s 00:05:39.728 11:41:30 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:39.728 11:41:30 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:39.728 11:41:30 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:39.728 11:41:30 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.728 11:41:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:39.728 11:41:30 -- common/autotest_common.sh@10 -- # set +x 00:05:39.985 11:41:30 -- json_config/json_config.sh@323 -- # killprocess 2296438 00:05:39.985 11:41:30 -- common/autotest_common.sh@936 -- # '[' -z 2296438 ']' 00:05:39.985 11:41:30 -- common/autotest_common.sh@940 -- # kill -0 2296438 00:05:39.985 11:41:30 -- common/autotest_common.sh@941 -- # uname 00:05:39.985 11:41:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.985 11:41:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2296438 00:05:39.985 11:41:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.985 11:41:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.985 11:41:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2296438' 00:05:39.985 killing process with pid 2296438 00:05:39.985 11:41:30 -- common/autotest_common.sh@955 -- # kill 2296438 00:05:39.985 11:41:30 -- common/autotest_common.sh@960 -- # wait 2296438 00:05:42.616 11:41:33 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.616 11:41:33 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:42.616 11:41:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:42.616 11:41:33 -- common/autotest_common.sh@10 -- # set +x 00:05:42.616 11:41:33 -- json_config/json_config.sh@328 -- # return 0 00:05:42.616 11:41:33 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:42.616 INFO: Success 00:05:42.616 00:05:42.616 real 0m18.898s 00:05:42.616 user 0m19.279s 00:05:42.616 sys 0m2.558s 00:05:42.616 11:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.616 11:41:33 -- common/autotest_common.sh@10 -- # set +x 00:05:42.616 ************************************ 00:05:42.616 END TEST json_config 00:05:42.616 ************************************ 00:05:42.616 11:41:33 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.616 11:41:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.616 11:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.616 11:41:33 -- common/autotest_common.sh@10 -- # set +x 00:05:42.874 ************************************ 00:05:42.874 START TEST json_config_extra_key 00:05:42.874 ************************************ 00:05:42.874 11:41:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.874 11:41:33 -- nvmf/common.sh@7 -- # uname -s 00:05:42.874 11:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.874 11:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.874 11:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.874 11:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.874 11:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.874 11:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.874 11:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.874 11:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.874 11:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.874 11:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.874 11:41:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:42.874 11:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:42.874 11:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.874 11:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.874 11:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.874 11:41:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.874 11:41:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.874 11:41:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.874 11:41:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.874 11:41:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.874 11:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.874 11:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.874 11:41:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.874 11:41:33 -- paths/export.sh@5 -- # export PATH 00:05:42.874 11:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.874 11:41:33 -- nvmf/common.sh@47 -- # : 0 00:05:42.874 11:41:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.874 11:41:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.874 11:41:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.874 11:41:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.874 11:41:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.874 11:41:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.874 11:41:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.874 11:41:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:42.874 INFO: launching applications... 
00:05:42.874 11:41:33 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:42.874 11:41:33 -- json_config/common.sh@9 -- # local app=target 00:05:42.874 11:41:33 -- json_config/common.sh@10 -- # shift 00:05:42.874 11:41:33 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:42.874 11:41:33 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:42.874 11:41:33 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:42.874 11:41:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.874 11:41:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.874 11:41:33 -- json_config/common.sh@22 -- # app_pid["$app"]=2298140 00:05:42.874 11:41:33 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:42.874 Waiting for target to run... 00:05:42.874 11:41:33 -- json_config/common.sh@25 -- # waitforlisten 2298140 /var/tmp/spdk_tgt.sock 00:05:42.874 11:41:33 -- common/autotest_common.sh@817 -- # '[' -z 2298140 ']' 00:05:42.874 11:41:33 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:42.874 11:41:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:42.874 11:41:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.874 11:41:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:42.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:42.874 11:41:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.874 11:41:33 -- common/autotest_common.sh@10 -- # set +x 00:05:43.133 [2024-04-18 11:41:33.507640] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:05:43.133 [2024-04-18 11:41:33.507735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298140 ] 00:05:43.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.700 [2024-04-18 11:41:33.991423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.700 [2024-04-18 11:41:34.207328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.635 11:41:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.635 11:41:34 -- common/autotest_common.sh@850 -- # return 0 00:05:44.635 11:41:34 -- json_config/common.sh@26 -- # echo '' 00:05:44.635 00:05:44.635 11:41:34 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.635 INFO: shutting down applications... 
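The shutdown that follows uses the same polling pattern applied to every target in this run: send SIGINT, then probe the pid every half second for up to 30 iterations until it is gone. A condensed sketch of that loop (the pid variable is illustrative):

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # process exited, shutdown complete
      sleep 0.5
  done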
00:05:44.635 11:41:34 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.635 11:41:34 -- json_config/common.sh@31 -- # local app=target 00:05:44.635 11:41:34 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.635 11:41:34 -- json_config/common.sh@35 -- # [[ -n 2298140 ]] 00:05:44.635 11:41:34 -- json_config/common.sh@38 -- # kill -SIGINT 2298140 00:05:44.635 11:41:34 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.635 11:41:34 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.635 11:41:34 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:44.635 11:41:34 -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.203 11:41:35 -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.203 11:41:35 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.203 11:41:35 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:45.203 11:41:35 -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.461 11:41:35 -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.461 11:41:35 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.461 11:41:35 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:45.461 11:41:35 -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.028 11:41:36 -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.028 11:41:36 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.028 11:41:36 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:46.028 11:41:36 -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.595 11:41:37 -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.595 11:41:37 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.595 11:41:37 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:46.595 11:41:37 -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.163 11:41:37 -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.163 11:41:37 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.163 11:41:37 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:47.163 11:41:37 -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.730 11:41:38 -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.730 11:41:38 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.730 11:41:38 -- json_config/common.sh@41 -- # kill -0 2298140 00:05:47.730 11:41:38 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.730 11:41:38 -- json_config/common.sh@43 -- # break 00:05:47.730 11:41:38 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.730 11:41:38 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.730 SPDK target shutdown done 00:05:47.730 11:41:38 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.730 Success 00:05:47.730 00:05:47.730 real 0m4.726s 00:05:47.730 user 0m3.945s 00:05:47.730 sys 0m0.754s 00:05:47.730 11:41:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.730 11:41:38 -- common/autotest_common.sh@10 -- # set +x 00:05:47.730 ************************************ 00:05:47.730 END TEST json_config_extra_key 00:05:47.730 ************************************ 00:05:47.730 11:41:38 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.730 11:41:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.731 11:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.731 11:41:38 -- common/autotest_common.sh@10 -- # set +x 00:05:47.731 ************************************ 00:05:47.731 START TEST alias_rpc 00:05:47.731 ************************************ 00:05:47.731 
11:41:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.990 * Looking for test storage... 00:05:47.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:47.990 11:41:38 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.990 11:41:38 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2299030 00:05:47.990 11:41:38 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.990 11:41:38 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2299030 00:05:47.990 11:41:38 -- common/autotest_common.sh@817 -- # '[' -z 2299030 ']' 00:05:47.990 11:41:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.990 11:41:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.990 11:41:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.990 11:41:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.990 11:41:38 -- common/autotest_common.sh@10 -- # set +x 00:05:47.990 [2024-04-18 11:41:38.400221] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:05:47.990 [2024-04-18 11:41:38.400312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299030 ] 00:05:47.990 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.990 [2024-04-18 11:41:38.521524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.248 [2024-04-18 11:41:38.721160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.183 11:41:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.183 11:41:39 -- common/autotest_common.sh@850 -- # return 0 00:05:49.183 11:41:39 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:49.442 11:41:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2299030 00:05:49.442 11:41:39 -- common/autotest_common.sh@936 -- # '[' -z 2299030 ']' 00:05:49.442 11:41:39 -- common/autotest_common.sh@940 -- # kill -0 2299030 00:05:49.442 11:41:39 -- common/autotest_common.sh@941 -- # uname 00:05:49.442 11:41:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.442 11:41:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2299030 00:05:49.442 11:41:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.442 11:41:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.442 11:41:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2299030' 00:05:49.442 killing process with pid 2299030 00:05:49.442 11:41:39 -- common/autotest_common.sh@955 -- # kill 2299030 00:05:49.442 11:41:39 -- common/autotest_common.sh@960 -- # wait 2299030 00:05:52.016 00:05:52.016 real 0m3.958s 00:05:52.016 user 0m3.891s 00:05:52.016 sys 0m0.594s 00:05:52.016 11:41:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.016 11:41:42 -- common/autotest_common.sh@10 -- # set +x 00:05:52.016 ************************************ 00:05:52.016 END TEST alias_rpc 00:05:52.016 
************************************ 00:05:52.016 11:41:42 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:52.016 11:41:42 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.016 11:41:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.016 11:41:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.016 11:41:42 -- common/autotest_common.sh@10 -- # set +x 00:05:52.016 ************************************ 00:05:52.016 START TEST spdkcli_tcp 00:05:52.016 ************************************ 00:05:52.016 11:41:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.016 * Looking for test storage... 00:05:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:52.016 11:41:42 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:52.016 11:41:42 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:52.017 11:41:42 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.017 11:41:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:52.017 11:41:42 -- common/autotest_common.sh@10 -- # set +x 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2299772 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@27 -- # waitforlisten 2299772 00:05:52.017 11:41:42 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.017 11:41:42 -- common/autotest_common.sh@817 -- # '[' -z 2299772 ']' 00:05:52.017 11:41:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.017 11:41:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.017 11:41:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.017 11:41:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.017 11:41:42 -- common/autotest_common.sh@10 -- # set +x 00:05:52.017 [2024-04-18 11:41:42.527562] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
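Stepping back to the alias_rpc run that just completed: it starts a bare spdk_tgt and replays a configuration through rpc.py load_config -i, then tears the target down. A hedged sketch of that round trip (the temporary file name is an assumption, and whether load_config reads stdin or takes a filename argument should be confirmed against scripts/rpc.py; the -i flag is taken verbatim from the log):

    # hedged sketch of the load_config round trip exercised by alias_rpc above
    ./build/bin/spdk_tgt &                                # serves /var/tmp/spdk.sock by default
    tgt_pid=$!
    # ... wait for the socket as sketched earlier ...
    ./scripts/rpc.py save_config > /tmp/config.json       # capture the running configuration
    ./scripts/rpc.py load_config -i < /tmp/config.json    # replay it, -i as used by the test above
    kill -SIGINT $tgt_pid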
00:05:52.017 [2024-04-18 11:41:42.527669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299772 ] 00:05:52.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.274 [2024-04-18 11:41:42.650144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.531 [2024-04-18 11:41:42.855880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.531 [2024-04-18 11:41:42.855898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.464 11:41:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.464 11:41:43 -- common/autotest_common.sh@850 -- # return 0 00:05:53.464 11:41:43 -- spdkcli/tcp.sh@31 -- # socat_pid=2299935 00:05:53.464 11:41:43 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:53.464 11:41:43 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:53.464 [ 00:05:53.464 "bdev_malloc_delete", 00:05:53.464 "bdev_malloc_create", 00:05:53.464 "bdev_null_resize", 00:05:53.464 "bdev_null_delete", 00:05:53.464 "bdev_null_create", 00:05:53.464 "bdev_nvme_cuse_unregister", 00:05:53.464 "bdev_nvme_cuse_register", 00:05:53.464 "bdev_opal_new_user", 00:05:53.464 "bdev_opal_set_lock_state", 00:05:53.464 "bdev_opal_delete", 00:05:53.464 "bdev_opal_get_info", 00:05:53.464 "bdev_opal_create", 00:05:53.464 "bdev_nvme_opal_revert", 00:05:53.464 "bdev_nvme_opal_init", 00:05:53.464 "bdev_nvme_send_cmd", 00:05:53.464 "bdev_nvme_get_path_iostat", 00:05:53.464 "bdev_nvme_get_mdns_discovery_info", 00:05:53.464 "bdev_nvme_stop_mdns_discovery", 00:05:53.464 "bdev_nvme_start_mdns_discovery", 00:05:53.464 "bdev_nvme_set_multipath_policy", 00:05:53.464 "bdev_nvme_set_preferred_path", 00:05:53.464 "bdev_nvme_get_io_paths", 00:05:53.464 "bdev_nvme_remove_error_injection", 00:05:53.464 "bdev_nvme_add_error_injection", 00:05:53.464 "bdev_nvme_get_discovery_info", 00:05:53.464 "bdev_nvme_stop_discovery", 00:05:53.464 "bdev_nvme_start_discovery", 00:05:53.464 "bdev_nvme_get_controller_health_info", 00:05:53.464 "bdev_nvme_disable_controller", 00:05:53.464 "bdev_nvme_enable_controller", 00:05:53.464 "bdev_nvme_reset_controller", 00:05:53.464 "bdev_nvme_get_transport_statistics", 00:05:53.464 "bdev_nvme_apply_firmware", 00:05:53.464 "bdev_nvme_detach_controller", 00:05:53.464 "bdev_nvme_get_controllers", 00:05:53.464 "bdev_nvme_attach_controller", 00:05:53.464 "bdev_nvme_set_hotplug", 00:05:53.464 "bdev_nvme_set_options", 00:05:53.464 "bdev_passthru_delete", 00:05:53.464 "bdev_passthru_create", 00:05:53.464 "bdev_lvol_grow_lvstore", 00:05:53.464 "bdev_lvol_get_lvols", 00:05:53.464 "bdev_lvol_get_lvstores", 00:05:53.464 "bdev_lvol_delete", 00:05:53.464 "bdev_lvol_set_read_only", 00:05:53.464 "bdev_lvol_resize", 00:05:53.464 "bdev_lvol_decouple_parent", 00:05:53.464 "bdev_lvol_inflate", 00:05:53.464 "bdev_lvol_rename", 00:05:53.464 "bdev_lvol_clone_bdev", 00:05:53.464 "bdev_lvol_clone", 00:05:53.464 "bdev_lvol_snapshot", 00:05:53.464 "bdev_lvol_create", 00:05:53.464 "bdev_lvol_delete_lvstore", 00:05:53.464 "bdev_lvol_rename_lvstore", 00:05:53.464 "bdev_lvol_create_lvstore", 00:05:53.464 "bdev_raid_set_options", 00:05:53.464 "bdev_raid_remove_base_bdev", 00:05:53.464 "bdev_raid_add_base_bdev", 00:05:53.464 "bdev_raid_delete", 00:05:53.464 "bdev_raid_create", 
00:05:53.464 "bdev_raid_get_bdevs", 00:05:53.464 "bdev_error_inject_error", 00:05:53.464 "bdev_error_delete", 00:05:53.464 "bdev_error_create", 00:05:53.464 "bdev_split_delete", 00:05:53.464 "bdev_split_create", 00:05:53.464 "bdev_delay_delete", 00:05:53.464 "bdev_delay_create", 00:05:53.464 "bdev_delay_update_latency", 00:05:53.464 "bdev_zone_block_delete", 00:05:53.464 "bdev_zone_block_create", 00:05:53.464 "blobfs_create", 00:05:53.464 "blobfs_detect", 00:05:53.464 "blobfs_set_cache_size", 00:05:53.464 "bdev_aio_delete", 00:05:53.464 "bdev_aio_rescan", 00:05:53.464 "bdev_aio_create", 00:05:53.464 "bdev_ftl_set_property", 00:05:53.464 "bdev_ftl_get_properties", 00:05:53.464 "bdev_ftl_get_stats", 00:05:53.464 "bdev_ftl_unmap", 00:05:53.464 "bdev_ftl_unload", 00:05:53.464 "bdev_ftl_delete", 00:05:53.464 "bdev_ftl_load", 00:05:53.464 "bdev_ftl_create", 00:05:53.464 "bdev_virtio_attach_controller", 00:05:53.464 "bdev_virtio_scsi_get_devices", 00:05:53.464 "bdev_virtio_detach_controller", 00:05:53.464 "bdev_virtio_blk_set_hotplug", 00:05:53.464 "bdev_iscsi_delete", 00:05:53.464 "bdev_iscsi_create", 00:05:53.464 "bdev_iscsi_set_options", 00:05:53.464 "accel_error_inject_error", 00:05:53.464 "ioat_scan_accel_module", 00:05:53.464 "dsa_scan_accel_module", 00:05:53.464 "iaa_scan_accel_module", 00:05:53.464 "vfu_virtio_create_scsi_endpoint", 00:05:53.464 "vfu_virtio_scsi_remove_target", 00:05:53.464 "vfu_virtio_scsi_add_target", 00:05:53.464 "vfu_virtio_create_blk_endpoint", 00:05:53.464 "vfu_virtio_delete_endpoint", 00:05:53.464 "keyring_file_remove_key", 00:05:53.464 "keyring_file_add_key", 00:05:53.464 "iscsi_set_options", 00:05:53.464 "iscsi_get_auth_groups", 00:05:53.464 "iscsi_auth_group_remove_secret", 00:05:53.464 "iscsi_auth_group_add_secret", 00:05:53.464 "iscsi_delete_auth_group", 00:05:53.464 "iscsi_create_auth_group", 00:05:53.464 "iscsi_set_discovery_auth", 00:05:53.464 "iscsi_get_options", 00:05:53.464 "iscsi_target_node_request_logout", 00:05:53.464 "iscsi_target_node_set_redirect", 00:05:53.464 "iscsi_target_node_set_auth", 00:05:53.464 "iscsi_target_node_add_lun", 00:05:53.464 "iscsi_get_stats", 00:05:53.464 "iscsi_get_connections", 00:05:53.464 "iscsi_portal_group_set_auth", 00:05:53.464 "iscsi_start_portal_group", 00:05:53.464 "iscsi_delete_portal_group", 00:05:53.464 "iscsi_create_portal_group", 00:05:53.464 "iscsi_get_portal_groups", 00:05:53.464 "iscsi_delete_target_node", 00:05:53.464 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.464 "iscsi_target_node_add_pg_ig_maps", 00:05:53.464 "iscsi_create_target_node", 00:05:53.464 "iscsi_get_target_nodes", 00:05:53.464 "iscsi_delete_initiator_group", 00:05:53.464 "iscsi_initiator_group_remove_initiators", 00:05:53.464 "iscsi_initiator_group_add_initiators", 00:05:53.464 "iscsi_create_initiator_group", 00:05:53.464 "iscsi_get_initiator_groups", 00:05:53.464 "nvmf_set_crdt", 00:05:53.464 "nvmf_set_config", 00:05:53.464 "nvmf_set_max_subsystems", 00:05:53.464 "nvmf_subsystem_get_listeners", 00:05:53.464 "nvmf_subsystem_get_qpairs", 00:05:53.464 "nvmf_subsystem_get_controllers", 00:05:53.464 "nvmf_get_stats", 00:05:53.464 "nvmf_get_transports", 00:05:53.464 "nvmf_create_transport", 00:05:53.464 "nvmf_get_targets", 00:05:53.464 "nvmf_delete_target", 00:05:53.464 "nvmf_create_target", 00:05:53.464 "nvmf_subsystem_allow_any_host", 00:05:53.464 "nvmf_subsystem_remove_host", 00:05:53.464 "nvmf_subsystem_add_host", 00:05:53.464 "nvmf_ns_remove_host", 00:05:53.464 "nvmf_ns_add_host", 00:05:53.464 "nvmf_subsystem_remove_ns", 00:05:53.464 
"nvmf_subsystem_add_ns", 00:05:53.464 "nvmf_subsystem_listener_set_ana_state", 00:05:53.464 "nvmf_discovery_get_referrals", 00:05:53.464 "nvmf_discovery_remove_referral", 00:05:53.464 "nvmf_discovery_add_referral", 00:05:53.464 "nvmf_subsystem_remove_listener", 00:05:53.464 "nvmf_subsystem_add_listener", 00:05:53.464 "nvmf_delete_subsystem", 00:05:53.465 "nvmf_create_subsystem", 00:05:53.465 "nvmf_get_subsystems", 00:05:53.465 "env_dpdk_get_mem_stats", 00:05:53.465 "nbd_get_disks", 00:05:53.465 "nbd_stop_disk", 00:05:53.465 "nbd_start_disk", 00:05:53.465 "ublk_recover_disk", 00:05:53.465 "ublk_get_disks", 00:05:53.465 "ublk_stop_disk", 00:05:53.465 "ublk_start_disk", 00:05:53.465 "ublk_destroy_target", 00:05:53.465 "ublk_create_target", 00:05:53.465 "virtio_blk_create_transport", 00:05:53.465 "virtio_blk_get_transports", 00:05:53.465 "vhost_controller_set_coalescing", 00:05:53.465 "vhost_get_controllers", 00:05:53.465 "vhost_delete_controller", 00:05:53.465 "vhost_create_blk_controller", 00:05:53.465 "vhost_scsi_controller_remove_target", 00:05:53.465 "vhost_scsi_controller_add_target", 00:05:53.465 "vhost_start_scsi_controller", 00:05:53.465 "vhost_create_scsi_controller", 00:05:53.465 "thread_set_cpumask", 00:05:53.465 "framework_get_scheduler", 00:05:53.465 "framework_set_scheduler", 00:05:53.465 "framework_get_reactors", 00:05:53.465 "thread_get_io_channels", 00:05:53.465 "thread_get_pollers", 00:05:53.465 "thread_get_stats", 00:05:53.465 "framework_monitor_context_switch", 00:05:53.465 "spdk_kill_instance", 00:05:53.465 "log_enable_timestamps", 00:05:53.465 "log_get_flags", 00:05:53.465 "log_clear_flag", 00:05:53.465 "log_set_flag", 00:05:53.465 "log_get_level", 00:05:53.465 "log_set_level", 00:05:53.465 "log_get_print_level", 00:05:53.465 "log_set_print_level", 00:05:53.465 "framework_enable_cpumask_locks", 00:05:53.465 "framework_disable_cpumask_locks", 00:05:53.465 "framework_wait_init", 00:05:53.465 "framework_start_init", 00:05:53.465 "scsi_get_devices", 00:05:53.465 "bdev_get_histogram", 00:05:53.465 "bdev_enable_histogram", 00:05:53.465 "bdev_set_qos_limit", 00:05:53.465 "bdev_set_qd_sampling_period", 00:05:53.465 "bdev_get_bdevs", 00:05:53.465 "bdev_reset_iostat", 00:05:53.465 "bdev_get_iostat", 00:05:53.465 "bdev_examine", 00:05:53.465 "bdev_wait_for_examine", 00:05:53.465 "bdev_set_options", 00:05:53.465 "notify_get_notifications", 00:05:53.465 "notify_get_types", 00:05:53.465 "accel_get_stats", 00:05:53.465 "accel_set_options", 00:05:53.465 "accel_set_driver", 00:05:53.465 "accel_crypto_key_destroy", 00:05:53.465 "accel_crypto_keys_get", 00:05:53.465 "accel_crypto_key_create", 00:05:53.465 "accel_assign_opc", 00:05:53.465 "accel_get_module_info", 00:05:53.465 "accel_get_opc_assignments", 00:05:53.465 "vmd_rescan", 00:05:53.465 "vmd_remove_device", 00:05:53.465 "vmd_enable", 00:05:53.465 "sock_set_default_impl", 00:05:53.465 "sock_impl_set_options", 00:05:53.465 "sock_impl_get_options", 00:05:53.465 "iobuf_get_stats", 00:05:53.465 "iobuf_set_options", 00:05:53.465 "keyring_get_keys", 00:05:53.465 "framework_get_pci_devices", 00:05:53.465 "framework_get_config", 00:05:53.465 "framework_get_subsystems", 00:05:53.465 "vfu_tgt_set_base_path", 00:05:53.465 "trace_get_info", 00:05:53.465 "trace_get_tpoint_group_mask", 00:05:53.465 "trace_disable_tpoint_group", 00:05:53.465 "trace_enable_tpoint_group", 00:05:53.465 "trace_clear_tpoint_mask", 00:05:53.465 "trace_set_tpoint_mask", 00:05:53.465 "spdk_get_version", 00:05:53.465 "rpc_get_methods" 00:05:53.465 ] 00:05:53.465 11:41:43 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:53.465 11:41:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:53.465 11:41:43 -- common/autotest_common.sh@10 -- # set +x 00:05:53.465 11:41:43 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.465 11:41:43 -- spdkcli/tcp.sh@38 -- # killprocess 2299772 00:05:53.465 11:41:43 -- common/autotest_common.sh@936 -- # '[' -z 2299772 ']' 00:05:53.465 11:41:43 -- common/autotest_common.sh@940 -- # kill -0 2299772 00:05:53.465 11:41:43 -- common/autotest_common.sh@941 -- # uname 00:05:53.465 11:41:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.465 11:41:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2299772 00:05:53.724 11:41:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.724 11:41:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.724 11:41:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2299772' 00:05:53.724 killing process with pid 2299772 00:05:53.724 11:41:44 -- common/autotest_common.sh@955 -- # kill 2299772 00:05:53.724 11:41:44 -- common/autotest_common.sh@960 -- # wait 2299772 00:05:56.256 00:05:56.256 real 0m4.075s 00:05:56.256 user 0m7.120s 00:05:56.256 sys 0m0.644s 00:05:56.256 11:41:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.256 11:41:46 -- common/autotest_common.sh@10 -- # set +x 00:05:56.256 ************************************ 00:05:56.256 END TEST spdkcli_tcp 00:05:56.256 ************************************ 00:05:56.256 11:41:46 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.256 11:41:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.256 11:41:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.256 11:41:46 -- common/autotest_common.sh@10 -- # set +x 00:05:56.256 ************************************ 00:05:56.256 START TEST dpdk_mem_utility 00:05:56.256 ************************************ 00:05:56.256 11:41:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.256 * Looking for test storage... 00:05:56.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:56.256 11:41:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:56.256 11:41:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2300518 00:05:56.256 11:41:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2300518 00:05:56.256 11:41:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.256 11:41:46 -- common/autotest_common.sh@817 -- # '[' -z 2300518 ']' 00:05:56.256 11:41:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.256 11:41:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.257 11:41:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
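Stepping back to the spdkcli_tcp run that just finished: the only moving parts are a socat bridge from TCP port 9998 to the target's UNIX-domain RPC socket, and rpc.py pointed at that TCP endpoint. A minimal sketch, with all commands and flags mirroring the log above:

    # hedged sketch of the TCP bridge exercised by spdkcli_tcp
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # expose the UNIX RPC socket on 127.0.0.1:9998
    socat_pid=$!
    # -r retries, -t per-call timeout, -s/-p select the TCP endpoint instead of the default UNIX socket
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid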
00:05:56.257 11:41:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.257 11:41:46 -- common/autotest_common.sh@10 -- # set +x 00:05:56.257 [2024-04-18 11:41:46.795148] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:05:56.257 [2024-04-18 11:41:46.795242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300518 ] 00:05:56.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.515 [2024-04-18 11:41:46.919633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.773 [2024-04-18 11:41:47.126906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.709 11:41:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.709 11:41:48 -- common/autotest_common.sh@850 -- # return 0 00:05:57.709 11:41:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:57.709 11:41:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:57.709 11:41:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.709 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:05:57.709 { 00:05:57.709 "filename": "/tmp/spdk_mem_dump.txt" 00:05:57.709 } 00:05:57.709 11:41:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.709 11:41:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:57.709 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:57.709 1 heaps totaling size 820.000000 MiB 00:05:57.709 size: 820.000000 MiB heap id: 0 00:05:57.709 end heaps---------- 00:05:57.709 8 mempools totaling size 598.116089 MiB 00:05:57.709 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:57.709 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:57.709 size: 84.521057 MiB name: bdev_io_2300518 00:05:57.709 size: 51.011292 MiB name: evtpool_2300518 00:05:57.709 size: 50.003479 MiB name: msgpool_2300518 00:05:57.709 size: 21.763794 MiB name: PDU_Pool 00:05:57.709 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:57.709 size: 0.026123 MiB name: Session_Pool 00:05:57.709 end mempools------- 00:05:57.709 6 memzones totaling size 4.142822 MiB 00:05:57.709 size: 1.000366 MiB name: RG_ring_0_2300518 00:05:57.709 size: 1.000366 MiB name: RG_ring_1_2300518 00:05:57.709 size: 1.000366 MiB name: RG_ring_4_2300518 00:05:57.709 size: 1.000366 MiB name: RG_ring_5_2300518 00:05:57.709 size: 0.125366 MiB name: RG_ring_2_2300518 00:05:57.709 size: 0.015991 MiB name: RG_ring_3_2300518 00:05:57.709 end memzones------- 00:05:57.709 11:41:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:57.709 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:57.709 list of free elements. 
size: 18.514832 MiB 00:05:57.709 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:57.709 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:57.709 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:57.709 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:57.709 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:57.709 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:57.709 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:57.709 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:57.709 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:57.709 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:57.709 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:57.709 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:57.709 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:57.709 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:57.709 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:57.709 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:57.709 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:57.709 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:57.709 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:57.709 list of standard malloc elements. size: 199.220764 MiB 00:05:57.709 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:57.709 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:57.709 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:57.709 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:57.709 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:57.709 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:57.709 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:57.709 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:57.709 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:57.709 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:57.709 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:57.709 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:57.709 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:57.709 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:57.709 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:57.709 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:57.709 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:57.709 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:57.709 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:57.709 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:57.709 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:57.709 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:57.709 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:57.709 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:57.709 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:57.709 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:57.710 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:57.710 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:57.710 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:05:57.710 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:57.710 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:57.710 list of memzone associated elements. size: 602.264404 MiB 00:05:57.710 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:57.710 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.710 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:57.710 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.710 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:57.710 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2300518_0 00:05:57.710 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:57.710 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2300518_0 00:05:57.710 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:57.710 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2300518_0 00:05:57.710 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:57.710 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.710 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:57.710 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.710 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:57.710 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2300518 00:05:57.710 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:57.710 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2300518 00:05:57.710 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:57.710 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2300518 00:05:57.710 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:57.710 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.710 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:57.710 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.710 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:57.710 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.710 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:57.710 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.710 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:57.710 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2300518 00:05:57.710 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:57.710 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2300518 00:05:57.710 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:57.710 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_2300518 00:05:57.710 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:57.710 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2300518 00:05:57.710 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:57.710 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2300518 00:05:57.710 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:57.710 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.710 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:57.710 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.710 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:57.710 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.710 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:57.710 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2300518 00:05:57.710 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:57.710 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.710 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:57.710 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.710 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:57.710 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2300518 00:05:57.710 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:57.710 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.710 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:57.710 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2300518 00:05:57.710 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:57.710 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2300518 00:05:57.710 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:57.710 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.710 11:41:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.710 11:41:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2300518 00:05:57.710 11:41:48 -- common/autotest_common.sh@936 -- # '[' -z 2300518 ']' 00:05:57.710 11:41:48 -- common/autotest_common.sh@940 -- # kill -0 2300518 00:05:57.710 11:41:48 -- common/autotest_common.sh@941 -- # uname 00:05:57.710 11:41:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.710 11:41:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2300518 00:05:57.710 11:41:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.710 11:41:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.710 11:41:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2300518' 00:05:57.710 killing process with pid 2300518 00:05:57.710 11:41:48 -- common/autotest_common.sh@955 -- # kill 2300518 00:05:57.710 11:41:48 -- common/autotest_common.sh@960 -- # wait 2300518 00:06:00.243 00:06:00.243 real 0m3.933s 00:06:00.243 user 0m3.814s 00:06:00.243 sys 0m0.589s 00:06:00.243 11:41:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.243 11:41:50 -- common/autotest_common.sh@10 -- # set +x 00:06:00.243 ************************************ 00:06:00.243 END TEST dpdk_mem_utility 00:06:00.243 ************************************ 00:06:00.243 11:41:50 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.243 11:41:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.243 11:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.243 11:41:50 -- common/autotest_common.sh@10 -- # set +x 00:06:00.243 ************************************ 00:06:00.243 START TEST event 00:06:00.243 ************************************ 00:06:00.243 11:41:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.502 * Looking for test storage... 00:06:00.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.502 11:41:50 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:00.502 11:41:50 -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.502 11:41:50 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.502 11:41:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:00.502 11:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.502 11:41:50 -- common/autotest_common.sh@10 -- # set +x 00:06:00.502 ************************************ 00:06:00.502 START TEST event_perf 00:06:00.502 ************************************ 00:06:00.502 11:41:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.760 Running I/O for 1 seconds...[2024-04-18 11:41:51.051499] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:06:00.761 [2024-04-18 11:41:51.051576] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301384 ] 00:06:00.761 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.761 [2024-04-18 11:41:51.172369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.019 [2024-04-18 11:41:51.385186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.019 [2024-04-18 11:41:51.385257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.019 [2024-04-18 11:41:51.385330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.019 [2024-04-18 11:41:51.385334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.395 Running I/O for 1 seconds... 00:06:02.395 lcore 0: 209832 00:06:02.395 lcore 1: 209832 00:06:02.395 lcore 2: 209831 00:06:02.395 lcore 3: 209831 00:06:02.395 done. 
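For reference, the dpdk_mem_utility run further above produced its heap/mempool/memzone report in two steps: an RPC that makes the target write a DPDK memory dump, then the dpdk_mem_info.py script to render it. A minimal sketch; the dump path is the one the RPC reported, and the assumption is that the script reads that dump at its default location:

    # hedged sketch of the dpdk_mem_utility flow shown earlier
    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes its dump; the RPC above reported /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py -m 0           # element-level detail for heap id 0, as in the log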
00:06:02.395 00:06:02.395 real 0m1.787s 00:06:02.395 user 0m4.610s 00:06:02.395 sys 0m0.172s 00:06:02.395 11:41:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.395 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.395 ************************************ 00:06:02.395 END TEST event_perf 00:06:02.395 ************************************ 00:06:02.395 11:41:52 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.395 11:41:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:02.395 11:41:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.395 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 ************************************ 00:06:02.654 START TEST event_reactor 00:06:02.654 ************************************ 00:06:02.654 11:41:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.654 [2024-04-18 11:41:53.043120] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:06:02.654 [2024-04-18 11:41:53.043197] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301743 ] 00:06:02.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.654 [2024-04-18 11:41:53.167576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.912 [2024-04-18 11:41:53.376314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.288 test_start 00:06:04.288 oneshot 00:06:04.288 tick 100 00:06:04.288 tick 100 00:06:04.288 tick 250 00:06:04.288 tick 100 00:06:04.288 tick 100 00:06:04.288 tick 100 00:06:04.288 tick 250 00:06:04.288 tick 500 00:06:04.288 tick 100 00:06:04.288 tick 100 00:06:04.288 tick 250 00:06:04.288 tick 100 00:06:04.288 tick 100 00:06:04.288 test_end 00:06:04.288 00:06:04.288 real 0m1.775s 00:06:04.288 user 0m1.603s 00:06:04.288 sys 0m0.164s 00:06:04.288 11:41:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.288 11:41:54 -- common/autotest_common.sh@10 -- # set +x 00:06:04.288 ************************************ 00:06:04.288 END TEST event_reactor 00:06:04.288 ************************************ 00:06:04.288 11:41:54 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.288 11:41:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:04.288 11:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.288 11:41:54 -- common/autotest_common.sh@10 -- # set +x 00:06:04.546 ************************************ 00:06:04.546 START TEST event_reactor_perf 00:06:04.546 ************************************ 00:06:04.546 11:41:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.546 [2024-04-18 11:41:55.022266] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:06:04.546 [2024-04-18 11:41:55.022363] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302228 ] 00:06:04.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.805 [2024-04-18 11:41:55.146702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.805 [2024-04-18 11:41:55.351835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.707 test_start 00:06:06.707 test_end 00:06:06.707 Performance: 402360 events per second 00:06:06.707 00:06:06.707 real 0m1.775s 00:06:06.707 user 0m1.617s 00:06:06.707 sys 0m0.150s 00:06:06.707 11:41:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.707 11:41:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.707 ************************************ 00:06:06.707 END TEST event_reactor_perf 00:06:06.707 ************************************ 00:06:06.707 11:41:56 -- event/event.sh@49 -- # uname -s 00:06:06.707 11:41:56 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.707 11:41:56 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.707 11:41:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.707 11:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.707 11:41:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.707 ************************************ 00:06:06.707 START TEST event_scheduler 00:06:06.707 ************************************ 00:06:06.707 11:41:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.707 * Looking for test storage... 00:06:06.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:06.708 11:41:57 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.708 11:41:57 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2302561 00:06:06.708 11:41:57 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.708 11:41:57 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.708 11:41:57 -- scheduler/scheduler.sh@37 -- # waitforlisten 2302561 00:06:06.708 11:41:57 -- common/autotest_common.sh@817 -- # '[' -z 2302561 ']' 00:06:06.708 11:41:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.708 11:41:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.708 11:41:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.708 11:41:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.708 11:41:57 -- common/autotest_common.sh@10 -- # set +x 00:06:06.708 [2024-04-18 11:41:57.169632] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:06:06.708 [2024-04-18 11:41:57.169748] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302561 ] 00:06:06.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.966 [2024-04-18 11:41:57.291015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.966 [2024-04-18 11:41:57.498412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.966 [2024-04-18 11:41:57.498493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.966 [2024-04-18 11:41:57.498533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.966 [2024-04-18 11:41:57.498524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.595 11:41:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:07.595 11:41:57 -- common/autotest_common.sh@850 -- # return 0 00:06:07.595 11:41:57 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.595 11:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:07.595 11:41:57 -- common/autotest_common.sh@10 -- # set +x 00:06:07.595 POWER: Env isn't set yet! 00:06:07.595 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:07.595 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.595 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.595 POWER: Attempting to initialise PSTAT power management... 00:06:07.595 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:07.595 POWER: Initialized successfully for lcore 0 power management 00:06:07.595 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:07.595 POWER: Initialized successfully for lcore 1 power management 00:06:07.595 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:07.595 POWER: Initialized successfully for lcore 2 power management 00:06:07.595 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:07.595 POWER: Initialized successfully for lcore 3 power management 00:06:07.595 11:41:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:07.595 11:41:57 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.595 11:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:07.595 11:41:57 -- common/autotest_common.sh@10 -- # set +x 00:06:07.916 [2024-04-18 11:41:58.365280] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
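The scheduler bring-up above is driven by two RPCs issued before subsystem init completes: framework_set_scheduler selects the dynamic scheduler, and framework_start_init finishes initialization, at which point the per-lcore governors flip to 'performance' as logged. The test issues them through its rpc_cmd wrapper; via rpc.py the equivalent calls would presumably be (both method names appear in the rpc_get_methods listing earlier):

    # hedged sketch: same two RPCs the scheduler test issues above
    ./scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler while init is paused
    ./scripts/rpc.py framework_start_init              # complete init; governors switch to 'performance'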
00:06:07.916 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:07.917 11:41:58 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.917 11:41:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.917 11:41:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.917 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.175 ************************************ 00:06:08.175 START TEST scheduler_create_thread 00:06:08.175 ************************************ 00:06:08.175 11:41:58 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 2 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 3 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 4 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 5 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 6 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 7 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 8 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 9 00:06:08.176 
11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 10 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 11:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.176 11:41:58 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.176 11:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.176 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.550 11:42:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.550 11:42:00 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.550 11:42:00 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.550 11:42:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.550 11:42:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.926 11:42:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:10.926 00:06:10.926 real 0m2.625s 00:06:10.926 user 0m0.026s 00:06:10.926 sys 0m0.005s 00:06:10.926 11:42:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.926 11:42:01 -- common/autotest_common.sh@10 -- # set +x 00:06:10.926 ************************************ 00:06:10.926 END TEST scheduler_create_thread 00:06:10.926 ************************************ 00:06:10.926 11:42:01 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.926 11:42:01 -- scheduler/scheduler.sh@46 -- # killprocess 2302561 00:06:10.926 11:42:01 -- common/autotest_common.sh@936 -- # '[' -z 2302561 ']' 00:06:10.926 11:42:01 -- common/autotest_common.sh@940 -- # kill -0 2302561 00:06:10.926 11:42:01 -- common/autotest_common.sh@941 -- # uname 00:06:10.926 11:42:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.926 11:42:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2302561 00:06:10.926 11:42:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:10.926 11:42:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:10.926 11:42:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2302561' 00:06:10.926 killing process with pid 2302561 00:06:10.926 11:42:01 -- common/autotest_common.sh@955 -- # kill 2302561 00:06:10.926 11:42:01 -- common/autotest_common.sh@960 -- # wait 2302561 00:06:11.184 [2024-04-18 11:42:01.635002] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
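The thread lifecycle exercised by scheduler_create_thread above is driven through an out-of-tree rpc.py plugin. A hedged sketch follows; the PYTHONPATH detail is an assumption about where the plugin module lives, the thread ids 11 and 12 are simply the ids returned in this run, and the flag values mirror the log:

    # hedged sketch of the plugin-driven thread lifecycle shown above
    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler   # assumption: makes scheduler_plugin importable
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # returns a thread id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # set thread 11 to 50% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # delete thread 12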
00:06:12.117 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:12.118 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:12.118 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:12.118 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:12.118 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:12.118 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:12.118 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:12.118 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:12.376 00:06:12.376 real 0m5.925s 00:06:12.376 user 0m9.106s 00:06:12.376 sys 0m0.640s 00:06:12.376 11:42:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.376 11:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.376 ************************************ 00:06:12.376 END TEST event_scheduler 00:06:12.376 ************************************ 00:06:12.635 11:42:02 -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.635 11:42:02 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.635 11:42:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.635 11:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.635 11:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.635 ************************************ 00:06:12.635 START TEST app_repeat 00:06:12.635 ************************************ 00:06:12.635 11:42:03 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:12.635 11:42:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.635 11:42:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.635 11:42:03 -- event/event.sh@13 -- # local nbd_list 00:06:12.635 11:42:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.635 11:42:03 -- event/event.sh@14 -- # local bdev_list 00:06:12.635 11:42:03 -- event/event.sh@15 -- # local repeat_times=4 00:06:12.635 11:42:03 -- event/event.sh@17 -- # modprobe nbd 00:06:12.635 11:42:03 -- event/event.sh@19 -- # repeat_pid=2303818 00:06:12.635 11:42:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.635 11:42:03 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.635 11:42:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2303818' 00:06:12.635 Process app_repeat pid: 2303818 00:06:12.635 11:42:03 -- event/event.sh@23 -- # for i in {0..2} 00:06:12.635 11:42:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.635 spdk_app_start Round 0 00:06:12.635 11:42:03 -- event/event.sh@25 -- # waitforlisten 2303818 /var/tmp/spdk-nbd.sock 00:06:12.635 11:42:03 -- common/autotest_common.sh@817 -- # '[' -z 2303818 ']' 00:06:12.635 11:42:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.635 11:42:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.635 11:42:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:12.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.635 11:42:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.635 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:06:12.635 [2024-04-18 11:42:03.182380] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:06:12.635 [2024-04-18 11:42:03.182474] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303818 ] 00:06:12.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.893 [2024-04-18 11:42:03.310829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.150 [2024-04-18 11:42:03.520840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.150 [2024-04-18 11:42:03.520848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.717 11:42:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.717 11:42:03 -- common/autotest_common.sh@850 -- # return 0 00:06:13.717 11:42:03 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.717 Malloc0 00:06:13.717 11:42:04 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.976 Malloc1 00:06:13.976 11:42:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@12 -- # local i 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.976 11:42:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.234 /dev/nbd0 00:06:14.234 11:42:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.234 11:42:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.234 11:42:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:14.234 11:42:04 -- common/autotest_common.sh@855 -- # local i 00:06:14.234 11:42:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:14.234 11:42:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:14.234 11:42:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:14.234 11:42:04 -- 
common/autotest_common.sh@859 -- # break 00:06:14.234 11:42:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:14.234 11:42:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:14.234 11:42:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.234 1+0 records in 00:06:14.234 1+0 records out 00:06:14.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299355 s, 13.7 MB/s 00:06:14.234 11:42:04 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.234 11:42:04 -- common/autotest_common.sh@872 -- # size=4096 00:06:14.234 11:42:04 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.234 11:42:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:14.234 11:42:04 -- common/autotest_common.sh@875 -- # return 0 00:06:14.234 11:42:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.234 11:42:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.234 11:42:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.491 /dev/nbd1 00:06:14.491 11:42:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.491 11:42:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.491 11:42:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:14.491 11:42:04 -- common/autotest_common.sh@855 -- # local i 00:06:14.491 11:42:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:14.491 11:42:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:14.491 11:42:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:14.491 11:42:04 -- common/autotest_common.sh@859 -- # break 00:06:14.491 11:42:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:14.492 11:42:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:14.492 11:42:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.492 1+0 records in 00:06:14.492 1+0 records out 00:06:14.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220735 s, 18.6 MB/s 00:06:14.492 11:42:04 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.492 11:42:04 -- common/autotest_common.sh@872 -- # size=4096 00:06:14.492 11:42:04 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.492 11:42:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:14.492 11:42:04 -- common/autotest_common.sh@875 -- # return 0 00:06:14.492 11:42:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.492 11:42:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.492 11:42:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.492 11:42:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.492 11:42:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.750 { 00:06:14.750 "nbd_device": "/dev/nbd0", 00:06:14.750 "bdev_name": "Malloc0" 00:06:14.750 }, 00:06:14.750 { 00:06:14.750 "nbd_device": "/dev/nbd1", 
00:06:14.750 "bdev_name": "Malloc1" 00:06:14.750 } 00:06:14.750 ]' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.750 { 00:06:14.750 "nbd_device": "/dev/nbd0", 00:06:14.750 "bdev_name": "Malloc0" 00:06:14.750 }, 00:06:14.750 { 00:06:14.750 "nbd_device": "/dev/nbd1", 00:06:14.750 "bdev_name": "Malloc1" 00:06:14.750 } 00:06:14.750 ]' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.750 /dev/nbd1' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.750 /dev/nbd1' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.750 256+0 records in 00:06:14.750 256+0 records out 00:06:14.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01097 s, 95.6 MB/s 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.750 256+0 records in 00:06:14.750 256+0 records out 00:06:14.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224396 s, 46.7 MB/s 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.750 256+0 records in 00:06:14.750 256+0 records out 00:06:14.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176451 s, 59.4 MB/s 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.750 11:42:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@41 -- # break 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.008 11:42:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@41 -- # break 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.267 11:42:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@65 -- # true 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.526 11:42:05 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.526 11:42:05 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.784 11:42:06 -- event/event.sh@35 -- # 
sleep 3 00:06:17.160 [2024-04-18 11:42:07.544641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.419 [2024-04-18 11:42:07.742610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.419 [2024-04-18 11:42:07.742611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.677 [2024-04-18 11:42:07.971379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.677 [2024-04-18 11:42:07.971438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.051 11:42:09 -- event/event.sh@23 -- # for i in {0..2} 00:06:19.051 11:42:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:19.051 spdk_app_start Round 1 00:06:19.051 11:42:09 -- event/event.sh@25 -- # waitforlisten 2303818 /var/tmp/spdk-nbd.sock 00:06:19.051 11:42:09 -- common/autotest_common.sh@817 -- # '[' -z 2303818 ']' 00:06:19.051 11:42:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.051 11:42:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.051 11:42:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.051 11:42:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.051 11:42:09 -- common/autotest_common.sh@10 -- # set +x 00:06:19.051 11:42:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.051 11:42:09 -- common/autotest_common.sh@850 -- # return 0 00:06:19.051 11:42:09 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.051 Malloc0 00:06:19.308 11:42:09 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.308 Malloc1 00:06:19.566 11:42:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@12 -- # local i 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.566 11:42:09 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.566 /dev/nbd0 00:06:19.566 11:42:10 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.566 11:42:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.566 11:42:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:19.566 11:42:10 -- common/autotest_common.sh@855 -- # local i 00:06:19.566 11:42:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:19.566 11:42:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:19.566 11:42:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:19.566 11:42:10 -- common/autotest_common.sh@859 -- # break 00:06:19.566 11:42:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:19.566 11:42:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:19.566 11:42:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.566 1+0 records in 00:06:19.566 1+0 records out 00:06:19.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225419 s, 18.2 MB/s 00:06:19.566 11:42:10 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.566 11:42:10 -- common/autotest_common.sh@872 -- # size=4096 00:06:19.566 11:42:10 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.566 11:42:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:19.566 11:42:10 -- common/autotest_common.sh@875 -- # return 0 00:06:19.566 11:42:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.566 11:42:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.566 11:42:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.824 /dev/nbd1 00:06:19.824 11:42:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.824 11:42:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.824 11:42:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:19.824 11:42:10 -- common/autotest_common.sh@855 -- # local i 00:06:19.824 11:42:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:19.824 11:42:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:19.824 11:42:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:19.825 11:42:10 -- common/autotest_common.sh@859 -- # break 00:06:19.825 11:42:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:19.825 11:42:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:19.825 11:42:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.825 1+0 records in 00:06:19.825 1+0 records out 00:06:19.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178641 s, 22.9 MB/s 00:06:19.825 11:42:10 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.825 11:42:10 -- common/autotest_common.sh@872 -- # size=4096 00:06:19.825 11:42:10 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.825 11:42:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:19.825 11:42:10 -- common/autotest_common.sh@875 -- # return 0 00:06:19.825 11:42:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.825 11:42:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.825 11:42:10 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.825 11:42:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.825 11:42:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.083 { 00:06:20.083 "nbd_device": "/dev/nbd0", 00:06:20.083 "bdev_name": "Malloc0" 00:06:20.083 }, 00:06:20.083 { 00:06:20.083 "nbd_device": "/dev/nbd1", 00:06:20.083 "bdev_name": "Malloc1" 00:06:20.083 } 00:06:20.083 ]' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.083 { 00:06:20.083 "nbd_device": "/dev/nbd0", 00:06:20.083 "bdev_name": "Malloc0" 00:06:20.083 }, 00:06:20.083 { 00:06:20.083 "nbd_device": "/dev/nbd1", 00:06:20.083 "bdev_name": "Malloc1" 00:06:20.083 } 00:06:20.083 ]' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.083 /dev/nbd1' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.083 /dev/nbd1' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.083 256+0 records in 00:06:20.083 256+0 records out 00:06:20.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113801 s, 92.1 MB/s 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.083 256+0 records in 00:06:20.083 256+0 records out 00:06:20.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149789 s, 70.0 MB/s 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.083 256+0 records in 00:06:20.083 256+0 records out 00:06:20.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173313 s, 60.5 MB/s 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@51 -- # local i 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.083 11:42:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@41 -- # break 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.341 11:42:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@41 -- # break 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.599 11:42:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.599 11:42:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@65 -- # true 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.857 11:42:11 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.857 11:42:11 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.115 11:42:11 -- event/event.sh@35 -- # sleep 3 00:06:22.489 [2024-04-18 11:42:12.903884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.748 [2024-04-18 11:42:13.105560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.748 [2024-04-18 11:42:13.105565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.006 [2024-04-18 11:42:13.332143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.006 [2024-04-18 11:42:13.332199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.379 11:42:14 -- event/event.sh@23 -- # for i in {0..2} 00:06:24.379 11:42:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.379 spdk_app_start Round 2 00:06:24.379 11:42:14 -- event/event.sh@25 -- # waitforlisten 2303818 /var/tmp/spdk-nbd.sock 00:06:24.379 11:42:14 -- common/autotest_common.sh@817 -- # '[' -z 2303818 ']' 00:06:24.379 11:42:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.379 11:42:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.379 11:42:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
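(For readers following the trace: the waitforlisten call above blocks until the app_repeat process answers on its RPC socket. A minimal sketch of that polling loop, assuming the rpc.py client and socket path used in this job; the retry count, sleep interval, and rpc_get_methods probe are illustrative stand-ins, not the exact logic of autotest_common.sh.)

# Sketch: poll an SPDK app's RPC socket until it answers (illustrative values)
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock} i
    for ((i = 0; i < 100; i++)); do
        # the process must still be alive
        kill -0 "$pid" 2>/dev/null || return 1
        # once an RPC succeeds, the target is listening on the socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}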
00:06:24.379 11:42:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.379 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:06:24.379 11:42:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.379 11:42:14 -- common/autotest_common.sh@850 -- # return 0 00:06:24.379 11:42:14 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.637 Malloc0 00:06:24.637 11:42:14 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.895 Malloc1 00:06:24.895 11:42:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@12 -- # local i 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.895 /dev/nbd0 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.895 11:42:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:24.895 11:42:15 -- common/autotest_common.sh@855 -- # local i 00:06:24.895 11:42:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:24.895 11:42:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:24.895 11:42:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:24.895 11:42:15 -- common/autotest_common.sh@859 -- # break 00:06:24.895 11:42:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:24.895 11:42:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:24.895 11:42:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.895 1+0 records in 00:06:24.895 1+0 records out 00:06:24.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230267 s, 17.8 MB/s 00:06:24.895 11:42:15 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.895 11:42:15 -- common/autotest_common.sh@872 -- # size=4096 00:06:24.895 11:42:15 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.895 11:42:15 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:06:24.895 11:42:15 -- common/autotest_common.sh@875 -- # return 0 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.895 11:42:15 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.154 /dev/nbd1 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.154 11:42:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:25.154 11:42:15 -- common/autotest_common.sh@855 -- # local i 00:06:25.154 11:42:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:25.154 11:42:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:25.154 11:42:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:25.154 11:42:15 -- common/autotest_common.sh@859 -- # break 00:06:25.154 11:42:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:25.154 11:42:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:25.154 11:42:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.154 1+0 records in 00:06:25.154 1+0 records out 00:06:25.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002569 s, 15.9 MB/s 00:06:25.154 11:42:15 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.154 11:42:15 -- common/autotest_common.sh@872 -- # size=4096 00:06:25.154 11:42:15 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.154 11:42:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:25.154 11:42:15 -- common/autotest_common.sh@875 -- # return 0 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.154 11:42:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.412 { 00:06:25.412 "nbd_device": "/dev/nbd0", 00:06:25.412 "bdev_name": "Malloc0" 00:06:25.412 }, 00:06:25.412 { 00:06:25.412 "nbd_device": "/dev/nbd1", 00:06:25.412 "bdev_name": "Malloc1" 00:06:25.412 } 00:06:25.412 ]' 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.412 { 00:06:25.412 "nbd_device": "/dev/nbd0", 00:06:25.412 "bdev_name": "Malloc0" 00:06:25.412 }, 00:06:25.412 { 00:06:25.412 "nbd_device": "/dev/nbd1", 00:06:25.412 "bdev_name": "Malloc1" 00:06:25.412 } 00:06:25.412 ]' 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.412 /dev/nbd1' 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.412 /dev/nbd1' 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.412 11:42:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.413 11:42:15 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.413 256+0 records in 00:06:25.413 256+0 records out 00:06:25.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112881 s, 92.9 MB/s 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.413 256+0 records in 00:06:25.413 256+0 records out 00:06:25.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157267 s, 66.7 MB/s 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.413 256+0 records in 00:06:25.413 256+0 records out 00:06:25.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175778 s, 59.7 MB/s 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@51 -- # local i 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.413 11:42:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.671 11:42:16 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@41 -- # break 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.671 11:42:16 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@41 -- # break 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.929 11:42:16 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.187 11:42:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@65 -- # true 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.188 11:42:16 -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.188 11:42:16 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.446 11:42:16 -- event/event.sh@35 -- # sleep 3 00:06:27.822 [2024-04-18 11:42:18.235549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.081 [2024-04-18 11:42:18.434823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.081 [2024-04-18 11:42:18.434825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.339 [2024-04-18 11:42:18.664394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.339 [2024-04-18 11:42:18.664447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
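(The nbd_rpc_data_verify cycle traced repeatedly above boils down to a handful of commands. A condensed sketch of one round, using the same Malloc bdevs, /dev/nbd0 and /dev/nbd1 devices, rpc.py socket, and scratch file that appear in this trace; it is a summary of the logged commands, not the full helper from nbd_common.sh.)

# Sketch of one nbd write/verify round, as traced above (paths taken from this job)
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

$rpc nbd_start_disk Malloc0 /dev/nbd0            # export each Malloc bdev as an nbd block device
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=$tmp bs=4096 count=256     # 1 MiB of random reference data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write the reference data through each device
    cmp -b -n 1M $tmp $nbd                              # read it back and compare against the reference
done
rm $tmp
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1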
00:06:29.716 11:42:19 -- event/event.sh@38 -- # waitforlisten 2303818 /var/tmp/spdk-nbd.sock 00:06:29.716 11:42:19 -- common/autotest_common.sh@817 -- # '[' -z 2303818 ']' 00:06:29.716 11:42:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.716 11:42:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.716 11:42:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.716 11:42:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.716 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:06:29.716 11:42:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.716 11:42:20 -- common/autotest_common.sh@850 -- # return 0 00:06:29.716 11:42:20 -- event/event.sh@39 -- # killprocess 2303818 00:06:29.716 11:42:20 -- common/autotest_common.sh@936 -- # '[' -z 2303818 ']' 00:06:29.716 11:42:20 -- common/autotest_common.sh@940 -- # kill -0 2303818 00:06:29.716 11:42:20 -- common/autotest_common.sh@941 -- # uname 00:06:29.716 11:42:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.716 11:42:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2303818 00:06:29.716 11:42:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.716 11:42:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.716 11:42:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2303818' 00:06:29.716 killing process with pid 2303818 00:06:29.716 11:42:20 -- common/autotest_common.sh@955 -- # kill 2303818 00:06:29.716 11:42:20 -- common/autotest_common.sh@960 -- # wait 2303818 00:06:31.094 spdk_app_start is called in Round 0. 00:06:31.094 Shutdown signal received, stop current app iteration 00:06:31.094 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:06:31.094 spdk_app_start is called in Round 1. 00:06:31.094 Shutdown signal received, stop current app iteration 00:06:31.094 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:06:31.094 spdk_app_start is called in Round 2. 00:06:31.094 Shutdown signal received, stop current app iteration 00:06:31.094 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:06:31.094 spdk_app_start is called in Round 3. 
00:06:31.094 Shutdown signal received, stop current app iteration 00:06:31.094 11:42:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:31.094 11:42:21 -- event/event.sh@42 -- # return 0 00:06:31.094 00:06:31.094 real 0m18.180s 00:06:31.094 user 0m36.218s 00:06:31.094 sys 0m3.170s 00:06:31.094 11:42:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.094 11:42:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.094 ************************************ 00:06:31.094 END TEST app_repeat 00:06:31.094 ************************************ 00:06:31.094 11:42:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:31.094 11:42:21 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:31.094 11:42:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.094 11:42:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.094 11:42:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.094 ************************************ 00:06:31.094 START TEST cpu_locks 00:06:31.094 ************************************ 00:06:31.094 11:42:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:31.094 * Looking for test storage... 00:06:31.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:31.094 11:42:21 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:31.094 11:42:21 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:31.094 11:42:21 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:31.094 11:42:21 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:31.094 11:42:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.094 11:42:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.094 11:42:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.353 ************************************ 00:06:31.353 START TEST default_locks 00:06:31.353 ************************************ 00:06:31.353 11:42:21 -- common/autotest_common.sh@1111 -- # default_locks 00:06:31.353 11:42:21 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2307687 00:06:31.353 11:42:21 -- event/cpu_locks.sh@47 -- # waitforlisten 2307687 00:06:31.353 11:42:21 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.353 11:42:21 -- common/autotest_common.sh@817 -- # '[' -z 2307687 ']' 00:06:31.353 11:42:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.353 11:42:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:31.353 11:42:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.353 11:42:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:31.353 11:42:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.353 [2024-04-18 11:42:21.804438] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:06:31.353 [2024-04-18 11:42:21.804527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307687 ] 00:06:31.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.635 [2024-04-18 11:42:21.926281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.635 [2024-04-18 11:42:22.125564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.612 11:42:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.612 11:42:23 -- common/autotest_common.sh@850 -- # return 0 00:06:32.612 11:42:23 -- event/cpu_locks.sh@49 -- # locks_exist 2307687 00:06:32.612 11:42:23 -- event/cpu_locks.sh@22 -- # lslocks -p 2307687 00:06:32.612 11:42:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.178 lslocks: write error 00:06:33.178 11:42:23 -- event/cpu_locks.sh@50 -- # killprocess 2307687 00:06:33.178 11:42:23 -- common/autotest_common.sh@936 -- # '[' -z 2307687 ']' 00:06:33.178 11:42:23 -- common/autotest_common.sh@940 -- # kill -0 2307687 00:06:33.178 11:42:23 -- common/autotest_common.sh@941 -- # uname 00:06:33.178 11:42:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.178 11:42:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2307687 00:06:33.178 11:42:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.178 11:42:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.178 11:42:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2307687' 00:06:33.178 killing process with pid 2307687 00:06:33.178 11:42:23 -- common/autotest_common.sh@955 -- # kill 2307687 00:06:33.178 11:42:23 -- common/autotest_common.sh@960 -- # wait 2307687 00:06:35.711 11:42:25 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2307687 00:06:35.711 11:42:25 -- common/autotest_common.sh@638 -- # local es=0 00:06:35.711 11:42:25 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2307687 00:06:35.711 11:42:25 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:35.711 11:42:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.711 11:42:25 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:35.711 11:42:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.711 11:42:25 -- common/autotest_common.sh@641 -- # waitforlisten 2307687 00:06:35.711 11:42:25 -- common/autotest_common.sh@817 -- # '[' -z 2307687 ']' 00:06:35.711 11:42:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.711 11:42:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.711 11:42:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
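(The locks_exist helper traced above checks that the running spdk_tgt holds its CPU-core lock by grepping lslocks output for the spdk_cpu_lock file name; the "lslocks: write error" line is just lslocks complaining when grep -q closes the pipe early. A minimal sketch of that check; the assumption that spdk_tgt -m 0x1 flocks a file whose name contains "spdk_cpu_lock" is inferred from the grep pattern, not spelled out in this log.)

# Sketch: verify an SPDK target still holds its per-core lock file (pattern from the trace above)
locks_exist_sketch() {
    local pid=$1
    # lslocks lists the locks held by the pid; grep -q succeeds if a spdk_cpu_lock entry is present
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist_sketch 2307687 && echo "core lock held"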
00:06:35.711 11:42:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.711 11:42:25 -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2307687) - No such process 00:06:35.711 ERROR: process (pid: 2307687) is no longer running 00:06:35.711 11:42:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:35.711 11:42:25 -- common/autotest_common.sh@850 -- # return 1 00:06:35.711 11:42:25 -- common/autotest_common.sh@641 -- # es=1 00:06:35.711 11:42:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:35.711 11:42:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:35.711 11:42:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:35.711 11:42:25 -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.711 11:42:26 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.711 11:42:26 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.711 11:42:26 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.711 00:06:35.711 real 0m4.288s 00:06:35.711 user 0m4.221s 00:06:35.711 sys 0m0.868s 00:06:35.711 11:42:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.711 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 ************************************ 00:06:35.711 END TEST default_locks 00:06:35.711 ************************************ 00:06:35.711 11:42:26 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.711 11:42:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.711 11:42:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.711 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 ************************************ 00:06:35.711 START TEST default_locks_via_rpc 00:06:35.711 ************************************ 00:06:35.711 11:42:26 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:35.711 11:42:26 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2308524 00:06:35.711 11:42:26 -- event/cpu_locks.sh@63 -- # waitforlisten 2308524 00:06:35.711 11:42:26 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.711 11:42:26 -- common/autotest_common.sh@817 -- # '[' -z 2308524 ']' 00:06:35.711 11:42:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.711 11:42:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.711 11:42:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.711 11:42:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.711 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:06:35.970 [2024-04-18 11:42:26.292694] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:06:35.970 [2024-04-18 11:42:26.292784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308524 ] 00:06:35.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.970 [2024-04-18 11:42:26.414388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.228 [2024-04-18 11:42:26.615365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.164 11:42:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:37.164 11:42:27 -- common/autotest_common.sh@850 -- # return 0 00:06:37.164 11:42:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.164 11:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.164 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:06:37.164 11:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.164 11:42:27 -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.164 11:42:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.164 11:42:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.164 11:42:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.164 11:42:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.164 11:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.164 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:06:37.164 11:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.164 11:42:27 -- event/cpu_locks.sh@71 -- # locks_exist 2308524 00:06:37.164 11:42:27 -- event/cpu_locks.sh@22 -- # lslocks -p 2308524 00:06:37.164 11:42:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.422 11:42:27 -- event/cpu_locks.sh@73 -- # killprocess 2308524 00:06:37.423 11:42:27 -- common/autotest_common.sh@936 -- # '[' -z 2308524 ']' 00:06:37.423 11:42:27 -- common/autotest_common.sh@940 -- # kill -0 2308524 00:06:37.423 11:42:27 -- common/autotest_common.sh@941 -- # uname 00:06:37.423 11:42:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.423 11:42:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2308524 00:06:37.423 11:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.423 11:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.423 11:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2308524' 00:06:37.423 killing process with pid 2308524 00:06:37.423 11:42:27 -- common/autotest_common.sh@955 -- # kill 2308524 00:06:37.423 11:42:27 -- common/autotest_common.sh@960 -- # wait 2308524 00:06:39.954 00:06:39.954 real 0m4.014s 00:06:39.954 user 0m3.920s 00:06:39.954 sys 0m0.680s 00:06:39.954 11:42:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.954 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.954 ************************************ 00:06:39.954 END TEST default_locks_via_rpc 00:06:39.954 ************************************ 00:06:39.954 11:42:30 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:39.954 11:42:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.954 11:42:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.954 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.954 ************************************ 00:06:39.954 START TEST non_locking_app_on_locked_coremask 
00:06:39.954 ************************************ 00:06:39.954 11:42:30 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:39.954 11:42:30 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2309353 00:06:39.954 11:42:30 -- event/cpu_locks.sh@81 -- # waitforlisten 2309353 /var/tmp/spdk.sock 00:06:39.954 11:42:30 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.954 11:42:30 -- common/autotest_common.sh@817 -- # '[' -z 2309353 ']' 00:06:39.954 11:42:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.954 11:42:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:39.954 11:42:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.954 11:42:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:39.954 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.954 [2024-04-18 11:42:30.502987] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:06:39.954 [2024-04-18 11:42:30.503078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309353 ] 00:06:40.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.213 [2024-04-18 11:42:30.627982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.471 [2024-04-18 11:42:30.850282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.403 11:42:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.403 11:42:31 -- common/autotest_common.sh@850 -- # return 0 00:06:41.403 11:42:31 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2309583 00:06:41.403 11:42:31 -- event/cpu_locks.sh@85 -- # waitforlisten 2309583 /var/tmp/spdk2.sock 00:06:41.403 11:42:31 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:41.403 11:42:31 -- common/autotest_common.sh@817 -- # '[' -z 2309583 ']' 00:06:41.403 11:42:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.403 11:42:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:41.403 11:42:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.403 11:42:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:41.403 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:06:41.403 [2024-04-18 11:42:31.811272] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:06:41.403 [2024-04-18 11:42:31.811361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309583 ] 00:06:41.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.661 [2024-04-18 11:42:31.980788] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.661 [2024-04-18 11:42:31.980834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.919 [2024-04-18 11:42:32.396299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.821 11:42:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.821 11:42:34 -- common/autotest_common.sh@850 -- # return 0 00:06:43.821 11:42:34 -- event/cpu_locks.sh@87 -- # locks_exist 2309353 00:06:43.821 11:42:34 -- event/cpu_locks.sh@22 -- # lslocks -p 2309353 00:06:43.821 11:42:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.388 lslocks: write error 00:06:44.388 11:42:34 -- event/cpu_locks.sh@89 -- # killprocess 2309353 00:06:44.388 11:42:34 -- common/autotest_common.sh@936 -- # '[' -z 2309353 ']' 00:06:44.388 11:42:34 -- common/autotest_common.sh@940 -- # kill -0 2309353 00:06:44.388 11:42:34 -- common/autotest_common.sh@941 -- # uname 00:06:44.388 11:42:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.388 11:42:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2309353 00:06:44.388 11:42:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.388 11:42:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.388 11:42:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2309353' 00:06:44.388 killing process with pid 2309353 00:06:44.388 11:42:34 -- common/autotest_common.sh@955 -- # kill 2309353 00:06:44.388 11:42:34 -- common/autotest_common.sh@960 -- # wait 2309353 00:06:49.688 11:42:39 -- event/cpu_locks.sh@90 -- # killprocess 2309583 00:06:49.688 11:42:39 -- common/autotest_common.sh@936 -- # '[' -z 2309583 ']' 00:06:49.688 11:42:39 -- common/autotest_common.sh@940 -- # kill -0 2309583 00:06:49.688 11:42:39 -- common/autotest_common.sh@941 -- # uname 00:06:49.688 11:42:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.688 11:42:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2309583 00:06:49.688 11:42:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.688 11:42:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.688 11:42:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2309583' 00:06:49.688 killing process with pid 2309583 00:06:49.688 11:42:39 -- common/autotest_common.sh@955 -- # kill 2309583 00:06:49.688 11:42:39 -- common/autotest_common.sh@960 -- # wait 2309583 00:06:51.590 00:06:51.590 real 0m11.428s 00:06:51.590 user 0m11.527s 00:06:51.590 sys 0m1.381s 00:06:51.590 11:42:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.590 11:42:41 -- common/autotest_common.sh@10 -- # set +x 00:06:51.590 ************************************ 00:06:51.590 END TEST non_locking_app_on_locked_coremask 00:06:51.590 ************************************ 00:06:51.590 11:42:41 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:51.590 11:42:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.590 11:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.590 11:42:41 -- common/autotest_common.sh@10 -- # set +x 00:06:51.590 ************************************ 00:06:51.590 START TEST locking_app_on_unlocked_coremask 00:06:51.590 ************************************ 00:06:51.590 11:42:42 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:51.590 11:42:42 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2311276 00:06:51.590 11:42:42 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2311276 /var/tmp/spdk.sock 00:06:51.590 11:42:42 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:51.590 11:42:42 -- common/autotest_common.sh@817 -- # '[' -z 2311276 ']' 00:06:51.590 11:42:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.590 11:42:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.590 11:42:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.590 11:42:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.590 11:42:42 -- common/autotest_common.sh@10 -- # set +x 00:06:51.590 [2024-04-18 11:42:42.128281] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:06:51.590 [2024-04-18 11:42:42.128374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311276 ] 00:06:51.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.849 [2024-04-18 11:42:42.252625] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.849 [2024-04-18 11:42:42.252666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.107 [2024-04-18 11:42:42.461567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.042 11:42:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:53.042 11:42:43 -- common/autotest_common.sh@850 -- # return 0 00:06:53.042 11:42:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2311545 00:06:53.042 11:42:43 -- event/cpu_locks.sh@103 -- # waitforlisten 2311545 /var/tmp/spdk2.sock 00:06:53.042 11:42:43 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.042 11:42:43 -- common/autotest_common.sh@817 -- # '[' -z 2311545 ']' 00:06:53.042 11:42:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.042 11:42:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.042 11:42:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.042 11:42:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.042 11:42:43 -- common/autotest_common.sh@10 -- # set +x 00:06:53.042 [2024-04-18 11:42:43.421646] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:06:53.042 [2024-04-18 11:42:43.421737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311545 ] 00:06:53.042 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.042 [2024-04-18 11:42:43.590849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.608 [2024-04-18 11:42:44.007522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.507 11:42:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:55.507 11:42:45 -- common/autotest_common.sh@850 -- # return 0 00:06:55.507 11:42:45 -- event/cpu_locks.sh@105 -- # locks_exist 2311545 00:06:55.507 11:42:45 -- event/cpu_locks.sh@22 -- # lslocks -p 2311545 00:06:55.507 11:42:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.880 lslocks: write error 00:06:56.880 11:42:47 -- event/cpu_locks.sh@107 -- # killprocess 2311276 00:06:56.880 11:42:47 -- common/autotest_common.sh@936 -- # '[' -z 2311276 ']' 00:06:56.880 11:42:47 -- common/autotest_common.sh@940 -- # kill -0 2311276 00:06:56.880 11:42:47 -- common/autotest_common.sh@941 -- # uname 00:06:56.880 11:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.880 11:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2311276 00:06:56.880 11:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.880 11:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.880 11:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2311276' 00:06:56.880 killing process with pid 2311276 00:06:56.880 11:42:47 -- common/autotest_common.sh@955 -- # kill 2311276 00:06:56.880 11:42:47 -- common/autotest_common.sh@960 -- # wait 2311276 00:07:02.145 11:42:51 -- event/cpu_locks.sh@108 -- # killprocess 2311545 00:07:02.145 11:42:51 -- common/autotest_common.sh@936 -- # '[' -z 2311545 ']' 00:07:02.145 11:42:51 -- common/autotest_common.sh@940 -- # kill -0 2311545 00:07:02.145 11:42:51 -- common/autotest_common.sh@941 -- # uname 00:07:02.145 11:42:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.145 11:42:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2311545 00:07:02.145 11:42:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.145 11:42:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.145 11:42:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2311545' 00:07:02.145 killing process with pid 2311545 00:07:02.145 11:42:51 -- common/autotest_common.sh@955 -- # kill 2311545 00:07:02.145 11:42:51 -- common/autotest_common.sh@960 -- # wait 2311545 00:07:04.047 00:07:04.047 real 0m12.105s 00:07:04.047 user 0m12.205s 00:07:04.047 sys 0m1.694s 00:07:04.047 11:42:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.047 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:07:04.047 ************************************ 00:07:04.047 END TEST locking_app_on_unlocked_coremask 00:07:04.047 ************************************ 00:07:04.047 11:42:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:04.047 11:42:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.047 11:42:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.047 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:07:04.047 
************************************ 00:07:04.047 START TEST locking_app_on_locked_coremask 00:07:04.047 ************************************ 00:07:04.047 11:42:54 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:04.047 11:42:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2313452 00:07:04.047 11:42:54 -- event/cpu_locks.sh@116 -- # waitforlisten 2313452 /var/tmp/spdk.sock 00:07:04.047 11:42:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.047 11:42:54 -- common/autotest_common.sh@817 -- # '[' -z 2313452 ']' 00:07:04.047 11:42:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.047 11:42:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.047 11:42:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.047 11:42:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.047 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:07:04.047 [2024-04-18 11:42:54.431414] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:04.047 [2024-04-18 11:42:54.431543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313452 ] 00:07:04.047 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.047 [2024-04-18 11:42:54.559536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.305 [2024-04-18 11:42:54.782308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.241 11:42:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.241 11:42:55 -- common/autotest_common.sh@850 -- # return 0 00:07:05.241 11:42:55 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.241 11:42:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2313718 00:07:05.241 11:42:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2313718 /var/tmp/spdk2.sock 00:07:05.241 11:42:55 -- common/autotest_common.sh@638 -- # local es=0 00:07:05.241 11:42:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2313718 /var/tmp/spdk2.sock 00:07:05.241 11:42:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:05.241 11:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:05.241 11:42:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:05.241 11:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:05.241 11:42:55 -- common/autotest_common.sh@641 -- # waitforlisten 2313718 /var/tmp/spdk2.sock 00:07:05.241 11:42:55 -- common/autotest_common.sh@817 -- # '[' -z 2313718 ']' 00:07:05.241 11:42:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.241 11:42:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.241 11:42:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:05.241 11:42:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.241 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:07:05.241 [2024-04-18 11:42:55.733033] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:05.241 [2024-04-18 11:42:55.733121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313718 ] 00:07:05.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.500 [2024-04-18 11:42:55.903053] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2313452 has claimed it. 00:07:05.500 [2024-04-18 11:42:55.903107] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2313718) - No such process 00:07:06.067 ERROR: process (pid: 2313718) is no longer running 00:07:06.067 11:42:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:06.067 11:42:56 -- common/autotest_common.sh@850 -- # return 1 00:07:06.067 11:42:56 -- common/autotest_common.sh@641 -- # es=1 00:07:06.067 11:42:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:06.067 11:42:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:06.067 11:42:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:06.067 11:42:56 -- event/cpu_locks.sh@122 -- # locks_exist 2313452 00:07:06.067 11:42:56 -- event/cpu_locks.sh@22 -- # lslocks -p 2313452 00:07:06.067 11:42:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.634 lslocks: write error 00:07:06.634 11:42:57 -- event/cpu_locks.sh@124 -- # killprocess 2313452 00:07:06.634 11:42:57 -- common/autotest_common.sh@936 -- # '[' -z 2313452 ']' 00:07:06.634 11:42:57 -- common/autotest_common.sh@940 -- # kill -0 2313452 00:07:06.634 11:42:57 -- common/autotest_common.sh@941 -- # uname 00:07:06.634 11:42:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.634 11:42:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2313452 00:07:06.634 11:42:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.634 11:42:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.634 11:42:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2313452' 00:07:06.634 killing process with pid 2313452 00:07:06.634 11:42:57 -- common/autotest_common.sh@955 -- # kill 2313452 00:07:06.634 11:42:57 -- common/autotest_common.sh@960 -- # wait 2313452 00:07:09.165 00:07:09.165 real 0m5.091s 00:07:09.165 user 0m5.173s 00:07:09.165 sys 0m1.047s 00:07:09.165 11:42:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.165 11:42:59 -- common/autotest_common.sh@10 -- # set +x 00:07:09.165 ************************************ 00:07:09.165 END TEST locking_app_on_locked_coremask 00:07:09.165 ************************************ 00:07:09.165 11:42:59 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:09.165 11:42:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.165 11:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.165 11:42:59 -- common/autotest_common.sh@10 -- # set +x 00:07:09.165 ************************************ 00:07:09.165 START TEST locking_overlapped_coremask 00:07:09.165 
************************************ 00:07:09.165 11:42:59 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:09.165 11:42:59 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2314547 00:07:09.165 11:42:59 -- event/cpu_locks.sh@133 -- # waitforlisten 2314547 /var/tmp/spdk.sock 00:07:09.165 11:42:59 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:09.165 11:42:59 -- common/autotest_common.sh@817 -- # '[' -z 2314547 ']' 00:07:09.165 11:42:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.165 11:42:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.165 11:42:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.165 11:42:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.165 11:42:59 -- common/autotest_common.sh@10 -- # set +x 00:07:09.424 [2024-04-18 11:42:59.731324] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:09.424 [2024-04-18 11:42:59.731417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314547 ] 00:07:09.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.424 [2024-04-18 11:42:59.860311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.682 [2024-04-18 11:43:00.086456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.682 [2024-04-18 11:43:00.086469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.683 [2024-04-18 11:43:00.086474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.618 11:43:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.618 11:43:01 -- common/autotest_common.sh@850 -- # return 0 00:07:10.618 11:43:01 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2314795 00:07:10.618 11:43:01 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2314795 /var/tmp/spdk2.sock 00:07:10.618 11:43:01 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:10.618 11:43:01 -- common/autotest_common.sh@638 -- # local es=0 00:07:10.618 11:43:01 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2314795 /var/tmp/spdk2.sock 00:07:10.618 11:43:01 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:10.618 11:43:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.618 11:43:01 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:10.618 11:43:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.618 11:43:01 -- common/autotest_common.sh@641 -- # waitforlisten 2314795 /var/tmp/spdk2.sock 00:07:10.618 11:43:01 -- common/autotest_common.sh@817 -- # '[' -z 2314795 ']' 00:07:10.618 11:43:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.618 11:43:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.618 11:43:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:10.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.618 11:43:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.618 11:43:01 -- common/autotest_common.sh@10 -- # set +x 00:07:10.619 [2024-04-18 11:43:01.121801] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:10.619 [2024-04-18 11:43:01.121889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314795 ] 00:07:10.893 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.893 [2024-04-18 11:43:01.292445] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2314547 has claimed it. 00:07:10.893 [2024-04-18 11:43:01.292504] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2314795) - No such process 00:07:11.161 ERROR: process (pid: 2314795) is no longer running 00:07:11.161 11:43:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.161 11:43:01 -- common/autotest_common.sh@850 -- # return 1 00:07:11.161 11:43:01 -- common/autotest_common.sh@641 -- # es=1 00:07:11.161 11:43:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:11.161 11:43:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:11.161 11:43:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:11.161 11:43:01 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:11.161 11:43:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.161 11:43:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.161 11:43:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.161 11:43:01 -- event/cpu_locks.sh@141 -- # killprocess 2314547 00:07:11.161 11:43:01 -- common/autotest_common.sh@936 -- # '[' -z 2314547 ']' 00:07:11.161 11:43:01 -- common/autotest_common.sh@940 -- # kill -0 2314547 00:07:11.161 11:43:01 -- common/autotest_common.sh@941 -- # uname 00:07:11.420 11:43:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.420 11:43:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2314547 00:07:11.420 11:43:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.420 11:43:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.420 11:43:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2314547' 00:07:11.420 killing process with pid 2314547 00:07:11.420 11:43:01 -- common/autotest_common.sh@955 -- # kill 2314547 00:07:11.420 11:43:01 -- common/autotest_common.sh@960 -- # wait 2314547 00:07:13.967 00:07:13.967 real 0m4.547s 00:07:13.967 user 0m11.792s 00:07:13.967 sys 0m0.703s 00:07:13.967 11:43:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.967 11:43:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.967 ************************************ 00:07:13.967 END TEST locking_overlapped_coremask 00:07:13.967 ************************************ 00:07:13.967 11:43:04 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:13.967 11:43:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.967 11:43:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.967 11:43:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.967 ************************************ 00:07:13.967 START TEST locking_overlapped_coremask_via_rpc 00:07:13.967 ************************************ 00:07:13.967 11:43:04 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:13.967 11:43:04 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2315398 00:07:13.967 11:43:04 -- event/cpu_locks.sh@149 -- # waitforlisten 2315398 /var/tmp/spdk.sock 00:07:13.967 11:43:04 -- common/autotest_common.sh@817 -- # '[' -z 2315398 ']' 00:07:13.967 11:43:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.967 11:43:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.967 11:43:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.967 11:43:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.967 11:43:04 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.967 11:43:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.967 [2024-04-18 11:43:04.434672] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:13.967 [2024-04-18 11:43:04.434780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315398 ] 00:07:13.967 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.226 [2024-04-18 11:43:04.557062] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.226 [2024-04-18 11:43:04.557099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.226 [2024-04-18 11:43:04.762045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.226 [2024-04-18 11:43:04.762107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.226 [2024-04-18 11:43:04.762113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.161 11:43:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.161 11:43:05 -- common/autotest_common.sh@850 -- # return 0 00:07:15.161 11:43:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2315664 00:07:15.161 11:43:05 -- event/cpu_locks.sh@153 -- # waitforlisten 2315664 /var/tmp/spdk2.sock 00:07:15.161 11:43:05 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.161 11:43:05 -- common/autotest_common.sh@817 -- # '[' -z 2315664 ']' 00:07:15.161 11:43:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.161 11:43:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.161 11:43:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:15.161 11:43:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.161 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:07:15.419 [2024-04-18 11:43:05.785468] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:15.419 [2024-04-18 11:43:05.785557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315664 ] 00:07:15.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.419 [2024-04-18 11:43:05.954843] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.419 [2024-04-18 11:43:05.954888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.986 [2024-04-18 11:43:06.402091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.986 [2024-04-18 11:43:06.405507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.986 [2024-04-18 11:43:06.405532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.888 11:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.888 11:43:08 -- common/autotest_common.sh@850 -- # return 0 00:07:17.888 11:43:08 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.888 11:43:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.888 11:43:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.888 11:43:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.888 11:43:08 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.888 11:43:08 -- common/autotest_common.sh@638 -- # local es=0 00:07:17.888 11:43:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.888 11:43:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:17.888 11:43:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.888 11:43:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:17.888 11:43:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.888 11:43:08 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.888 11:43:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.888 11:43:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.888 [2024-04-18 11:43:08.287570] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2315398 has claimed it. 
00:07:17.888 request: 00:07:17.888 { 00:07:17.888 "method": "framework_enable_cpumask_locks", 00:07:17.888 "req_id": 1 00:07:17.888 } 00:07:17.888 Got JSON-RPC error response 00:07:17.888 response: 00:07:17.888 { 00:07:17.888 "code": -32603, 00:07:17.888 "message": "Failed to claim CPU core: 2" 00:07:17.888 } 00:07:17.888 11:43:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:17.888 11:43:08 -- common/autotest_common.sh@641 -- # es=1 00:07:17.888 11:43:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:17.888 11:43:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:17.888 11:43:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:17.888 11:43:08 -- event/cpu_locks.sh@158 -- # waitforlisten 2315398 /var/tmp/spdk.sock 00:07:17.888 11:43:08 -- common/autotest_common.sh@817 -- # '[' -z 2315398 ']' 00:07:17.888 11:43:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.888 11:43:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:17.888 11:43:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.888 11:43:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:17.888 11:43:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 11:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:18.146 11:43:08 -- common/autotest_common.sh@850 -- # return 0 00:07:18.146 11:43:08 -- event/cpu_locks.sh@159 -- # waitforlisten 2315664 /var/tmp/spdk2.sock 00:07:18.146 11:43:08 -- common/autotest_common.sh@817 -- # '[' -z 2315664 ']' 00:07:18.146 11:43:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.146 11:43:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.146 11:43:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:18.146 11:43:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.146 11:43:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 11:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:18.146 11:43:08 -- common/autotest_common.sh@850 -- # return 0 00:07:18.146 11:43:08 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:18.146 11:43:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.146 11:43:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.146 11:43:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.146 00:07:18.146 real 0m4.331s 00:07:18.146 user 0m0.963s 00:07:18.146 sys 0m0.215s 00:07:18.146 11:43:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.146 11:43:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 ************************************ 00:07:18.146 END TEST locking_overlapped_coremask_via_rpc 00:07:18.146 ************************************ 00:07:18.404 11:43:08 -- event/cpu_locks.sh@174 -- # cleanup 00:07:18.404 11:43:08 -- event/cpu_locks.sh@15 -- # [[ -z 2315398 ]] 00:07:18.404 11:43:08 -- event/cpu_locks.sh@15 -- # killprocess 2315398 00:07:18.404 11:43:08 -- common/autotest_common.sh@936 -- # '[' -z 2315398 ']' 00:07:18.404 11:43:08 -- common/autotest_common.sh@940 -- # kill -0 2315398 00:07:18.404 11:43:08 -- common/autotest_common.sh@941 -- # uname 00:07:18.404 11:43:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.404 11:43:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2315398 00:07:18.404 11:43:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.404 11:43:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.404 11:43:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2315398' 00:07:18.404 killing process with pid 2315398 00:07:18.404 11:43:08 -- common/autotest_common.sh@955 -- # kill 2315398 00:07:18.404 11:43:08 -- common/autotest_common.sh@960 -- # wait 2315398 00:07:20.936 11:43:11 -- event/cpu_locks.sh@16 -- # [[ -z 2315664 ]] 00:07:20.936 11:43:11 -- event/cpu_locks.sh@16 -- # killprocess 2315664 00:07:20.936 11:43:11 -- common/autotest_common.sh@936 -- # '[' -z 2315664 ']' 00:07:20.936 11:43:11 -- common/autotest_common.sh@940 -- # kill -0 2315664 00:07:20.936 11:43:11 -- common/autotest_common.sh@941 -- # uname 00:07:20.936 11:43:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:20.936 11:43:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2315664 00:07:20.936 11:43:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:20.936 11:43:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:20.936 11:43:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2315664' 00:07:20.936 killing process with pid 2315664 00:07:20.936 11:43:11 -- common/autotest_common.sh@955 -- # kill 2315664 00:07:20.936 11:43:11 -- common/autotest_common.sh@960 -- # wait 2315664 00:07:23.467 11:43:13 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.467 11:43:13 -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.467 11:43:13 -- event/cpu_locks.sh@15 -- # [[ -z 2315398 ]] 00:07:23.467 11:43:13 -- event/cpu_locks.sh@15 -- # killprocess 2315398 
00:07:23.467 11:43:13 -- common/autotest_common.sh@936 -- # '[' -z 2315398 ']' 00:07:23.467 11:43:13 -- common/autotest_common.sh@940 -- # kill -0 2315398 00:07:23.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2315398) - No such process 00:07:23.467 11:43:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2315398 is not found' 00:07:23.467 Process with pid 2315398 is not found 00:07:23.467 11:43:13 -- event/cpu_locks.sh@16 -- # [[ -z 2315664 ]] 00:07:23.467 11:43:13 -- event/cpu_locks.sh@16 -- # killprocess 2315664 00:07:23.467 11:43:13 -- common/autotest_common.sh@936 -- # '[' -z 2315664 ']' 00:07:23.467 11:43:13 -- common/autotest_common.sh@940 -- # kill -0 2315664 00:07:23.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2315664) - No such process 00:07:23.467 11:43:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2315664 is not found' 00:07:23.467 Process with pid 2315664 is not found 00:07:23.467 11:43:13 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.467 00:07:23.467 real 0m52.286s 00:07:23.467 user 1m25.241s 00:07:23.467 sys 0m8.311s 00:07:23.467 11:43:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.467 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:07:23.467 ************************************ 00:07:23.467 END TEST cpu_locks 00:07:23.467 ************************************ 00:07:23.467 00:07:23.467 real 1m23.057s 00:07:23.467 user 2m18.835s 00:07:23.467 sys 0m13.399s 00:07:23.467 11:43:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.467 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:07:23.467 ************************************ 00:07:23.467 END TEST event 00:07:23.467 ************************************ 00:07:23.467 11:43:13 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:23.467 11:43:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.467 11:43:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.467 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:07:23.467 ************************************ 00:07:23.467 START TEST thread 00:07:23.467 ************************************ 00:07:23.467 11:43:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:23.727 * Looking for test storage... 00:07:23.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:23.727 11:43:14 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.727 11:43:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:23.727 11:43:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.727 11:43:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.727 ************************************ 00:07:23.727 START TEST thread_poller_perf 00:07:23.727 ************************************ 00:07:23.727 11:43:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.986 [2024-04-18 11:43:14.287781] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:07:23.986 [2024-04-18 11:43:14.287856] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317128 ] 00:07:23.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.986 [2024-04-18 11:43:14.407988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.244 [2024-04-18 11:43:14.611390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.244 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:25.619 ====================================== 00:07:25.619 busy:2509001292 (cyc) 00:07:25.619 total_run_count: 413000 00:07:25.619 tsc_hz: 2500000000 (cyc) 00:07:25.619 ====================================== 00:07:25.619 poller_cost: 6075 (cyc), 2430 (nsec) 00:07:25.619 00:07:25.619 real 0m1.770s 00:07:25.619 user 0m1.609s 00:07:25.619 sys 0m0.154s 00:07:25.619 11:43:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.619 11:43:16 -- common/autotest_common.sh@10 -- # set +x 00:07:25.619 ************************************ 00:07:25.619 END TEST thread_poller_perf 00:07:25.620 ************************************ 00:07:25.620 11:43:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.620 11:43:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:25.620 11:43:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.620 11:43:16 -- common/autotest_common.sh@10 -- # set +x 00:07:25.878 ************************************ 00:07:25.878 START TEST thread_poller_perf 00:07:25.878 ************************************ 00:07:25.878 11:43:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.878 [2024-04-18 11:43:16.267527] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:25.878 [2024-04-18 11:43:16.267600] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317555 ] 00:07:25.878 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.878 [2024-04-18 11:43:16.388617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.136 [2024-04-18 11:43:16.593489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.136 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:27.512 ====================================== 00:07:27.512 busy:2503467832 (cyc) 00:07:27.512 total_run_count: 5318000 00:07:27.512 tsc_hz: 2500000000 (cyc) 00:07:27.512 ====================================== 00:07:27.512 poller_cost: 470 (cyc), 188 (nsec) 00:07:27.512 00:07:27.512 real 0m1.776s 00:07:27.512 user 0m1.618s 00:07:27.512 sys 0m0.152s 00:07:27.512 11:43:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.512 11:43:17 -- common/autotest_common.sh@10 -- # set +x 00:07:27.512 ************************************ 00:07:27.512 END TEST thread_poller_perf 00:07:27.512 ************************************ 00:07:27.512 11:43:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.512 00:07:27.512 real 0m4.073s 00:07:27.512 user 0m3.397s 00:07:27.512 sys 0m0.638s 00:07:27.512 11:43:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.512 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.512 ************************************ 00:07:27.512 END TEST thread 00:07:27.512 ************************************ 00:07:27.770 11:43:18 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:27.770 11:43:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.770 11:43:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.770 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.770 ************************************ 00:07:27.770 START TEST accel 00:07:27.770 ************************************ 00:07:27.770 11:43:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:28.028 * Looking for test storage... 00:07:28.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:28.028 11:43:18 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:28.028 11:43:18 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:28.028 11:43:18 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.028 11:43:18 -- accel/accel.sh@62 -- # spdk_tgt_pid=2318011 00:07:28.028 11:43:18 -- accel/accel.sh@63 -- # waitforlisten 2318011 00:07:28.028 11:43:18 -- common/autotest_common.sh@817 -- # '[' -z 2318011 ']' 00:07:28.028 11:43:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.028 11:43:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:28.028 11:43:18 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:28.028 11:43:18 -- accel/accel.sh@61 -- # build_accel_config 00:07:28.028 11:43:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.028 11:43:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:28.028 11:43:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.028 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:07:28.028 11:43:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.028 11:43:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.028 11:43:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.028 11:43:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.028 11:43:18 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.028 11:43:18 -- accel/accel.sh@41 -- # jq -r . 
00:07:28.028 [2024-04-18 11:43:18.468322] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:28.028 [2024-04-18 11:43:18.468412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318011 ] 00:07:28.028 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.287 [2024-04-18 11:43:18.591364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.287 [2024-04-18 11:43:18.798995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.222 11:43:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:29.222 11:43:19 -- common/autotest_common.sh@850 -- # return 0 00:07:29.222 11:43:19 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:29.222 11:43:19 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:29.222 11:43:19 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:29.222 11:43:19 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:29.222 11:43:19 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:29.222 11:43:19 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:29.222 11:43:19 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:29.222 11:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.222 11:43:19 -- common/autotest_common.sh@10 -- # set +x 00:07:29.222 11:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.222 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.222 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.222 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.223 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.223 11:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.223 11:43:19 -- accel/accel.sh@72 -- # IFS== 00:07:29.223 11:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:07:29.223 11:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.223 11:43:19 -- accel/accel.sh@75 -- # killprocess 2318011 00:07:29.223 11:43:19 -- common/autotest_common.sh@936 -- # '[' -z 2318011 ']' 00:07:29.223 11:43:19 -- common/autotest_common.sh@940 -- # kill -0 2318011 00:07:29.223 11:43:19 -- common/autotest_common.sh@941 -- # uname 00:07:29.223 11:43:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.223 11:43:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2318011 00:07:29.481 11:43:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.481 11:43:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.481 11:43:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2318011' 00:07:29.481 killing process with pid 2318011 00:07:29.481 11:43:19 -- common/autotest_common.sh@955 -- # kill 2318011 00:07:29.481 11:43:19 -- common/autotest_common.sh@960 -- # wait 2318011 00:07:32.009 11:43:22 -- accel/accel.sh@76 -- # trap - ERR 00:07:32.009 11:43:22 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:32.009 11:43:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:32.009 11:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.009 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.009 11:43:22 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:32.009 11:43:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:32.009 11:43:22 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:32.009 11:43:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.009 11:43:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.009 11:43:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.009 11:43:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.009 11:43:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.009 11:43:22 -- accel/accel.sh@40 -- # local IFS=, 00:07:32.009 11:43:22 -- accel/accel.sh@41 -- # jq -r . 00:07:32.009 11:43:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.009 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.009 11:43:22 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:32.009 11:43:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.009 11:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.009 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.009 ************************************ 00:07:32.009 START TEST accel_missing_filename 00:07:32.009 ************************************ 00:07:32.268 11:43:22 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:32.268 11:43:22 -- common/autotest_common.sh@638 -- # local es=0 00:07:32.268 11:43:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:32.268 11:43:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:32.268 11:43:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:32.268 11:43:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:32.268 11:43:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:32.268 11:43:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:32.268 11:43:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:32.268 11:43:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.268 11:43:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.268 11:43:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.268 11:43:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.268 11:43:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.268 11:43:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.268 11:43:22 -- accel/accel.sh@40 -- # local IFS=, 00:07:32.268 11:43:22 -- accel/accel.sh@41 -- # jq -r . 00:07:32.268 [2024-04-18 11:43:22.613408] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:32.268 [2024-04-18 11:43:22.613499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318854 ] 00:07:32.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.268 [2024-04-18 11:43:22.737416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.525 [2024-04-18 11:43:22.944875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.782 [2024-04-18 11:43:23.173921] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.347 [2024-04-18 11:43:23.707610] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:33.606 A filename is required. 
00:07:33.606 11:43:24 -- common/autotest_common.sh@641 -- # es=234 00:07:33.606 11:43:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:33.606 11:43:24 -- common/autotest_common.sh@650 -- # es=106 00:07:33.606 11:43:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:33.606 11:43:24 -- common/autotest_common.sh@658 -- # es=1 00:07:33.606 11:43:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:33.606 00:07:33.606 real 0m1.549s 00:07:33.606 user 0m1.376s 00:07:33.606 sys 0m0.205s 00:07:33.606 11:43:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.606 11:43:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.606 ************************************ 00:07:33.607 END TEST accel_missing_filename 00:07:33.607 ************************************ 00:07:33.607 11:43:24 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.607 11:43:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:33.607 11:43:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.607 11:43:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.865 ************************************ 00:07:33.865 START TEST accel_compress_verify 00:07:33.865 ************************************ 00:07:33.865 11:43:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.865 11:43:24 -- common/autotest_common.sh@638 -- # local es=0 00:07:33.865 11:43:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.865 11:43:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:33.865 11:43:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.865 11:43:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:33.865 11:43:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.865 11:43:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.865 11:43:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:33.865 11:43:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.865 11:43:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.865 11:43:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.865 11:43:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.865 11:43:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.865 11:43:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.865 11:43:24 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.865 11:43:24 -- accel/accel.sh@41 -- # jq -r . 00:07:33.865 [2024-04-18 11:43:24.357537] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:07:33.865 [2024-04-18 11:43:24.357630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319151 ] 00:07:34.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.124 [2024-04-18 11:43:24.473333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.383 [2024-04-18 11:43:24.674027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.383 [2024-04-18 11:43:24.904426] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.949 [2024-04-18 11:43:25.427725] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:35.516 00:07:35.516 Compression does not support the verify option, aborting. 00:07:35.516 11:43:25 -- common/autotest_common.sh@641 -- # es=161 00:07:35.516 11:43:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:35.516 11:43:25 -- common/autotest_common.sh@650 -- # es=33 00:07:35.516 11:43:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:35.516 11:43:25 -- common/autotest_common.sh@658 -- # es=1 00:07:35.516 11:43:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:35.516 00:07:35.516 real 0m1.511s 00:07:35.516 user 0m1.336s 00:07:35.516 sys 0m0.207s 00:07:35.516 11:43:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.516 11:43:25 -- common/autotest_common.sh@10 -- # set +x 00:07:35.516 ************************************ 00:07:35.516 END TEST accel_compress_verify 00:07:35.516 ************************************ 00:07:35.516 11:43:25 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:35.516 11:43:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:35.516 11:43:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.516 11:43:25 -- common/autotest_common.sh@10 -- # set +x 00:07:35.516 ************************************ 00:07:35.516 START TEST accel_wrong_workload 00:07:35.516 ************************************ 00:07:35.516 11:43:26 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:35.516 11:43:26 -- common/autotest_common.sh@638 -- # local es=0 00:07:35.516 11:43:26 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:35.516 11:43:26 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:35.516 11:43:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.516 11:43:26 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:35.516 11:43:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.516 11:43:26 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:35.516 11:43:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:35.516 11:43:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.516 11:43:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.516 11:43:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.516 11:43:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.516 11:43:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.516 11:43:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.516 11:43:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.516 11:43:26 -- accel/accel.sh@41 -- # jq -r . 
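Compression also rejects the verify switch, so the accel_compress_verify case above is likewise expected to fail ("Compression does not support the verify option, aborting."). Checksum-style workloads do accept -y; a hedged example using the crc32c invocation exercised later in this log, same binary path assumed:

    # -y asks accel_perf to verify results; valid for crc32c, rejected for compress
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y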
00:07:35.775 Unsupported workload type: foobar 00:07:35.775 [2024-04-18 11:43:26.078217] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:35.775 accel_perf options: 00:07:35.775 [-h help message] 00:07:35.775 [-q queue depth per core] 00:07:35.775 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:35.775 [-T number of threads per core 00:07:35.775 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:35.775 [-t time in seconds] 00:07:35.775 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:35.775 [ dif_verify, , dif_generate, dif_generate_copy 00:07:35.775 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:35.775 [-l for compress/decompress workloads, name of uncompressed input file 00:07:35.775 [-S for crc32c workload, use this seed value (default 0) 00:07:35.775 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:35.775 [-f for fill workload, use this BYTE value (default 255) 00:07:35.775 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:35.775 [-y verify result if this switch is on] 00:07:35.775 [-a tasks to allocate per core (default: same value as -q)] 00:07:35.775 Can be used to spread operations across a wider range of memory. 00:07:35.776 11:43:26 -- common/autotest_common.sh@641 -- # es=1 00:07:35.776 11:43:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:35.776 11:43:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:35.776 11:43:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:35.776 00:07:35.776 real 0m0.081s 00:07:35.776 user 0m0.073s 00:07:35.776 sys 0m0.049s 00:07:35.776 11:43:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.776 11:43:26 -- common/autotest_common.sh@10 -- # set +x 00:07:35.776 ************************************ 00:07:35.776 END TEST accel_wrong_workload 00:07:35.776 ************************************ 00:07:35.776 11:43:26 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:35.776 11:43:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:35.776 11:43:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.776 11:43:26 -- common/autotest_common.sh@10 -- # set +x 00:07:35.776 ************************************ 00:07:35.776 START TEST accel_negative_buffers 00:07:35.776 ************************************ 00:07:35.776 11:43:26 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:35.776 11:43:26 -- common/autotest_common.sh@638 -- # local es=0 00:07:35.776 11:43:26 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:35.776 11:43:26 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:35.776 11:43:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.776 11:43:26 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:35.776 11:43:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.776 11:43:26 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:35.776 11:43:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:35.776 11:43:26 -- accel/accel.sh@12 
-- # build_accel_config 00:07:35.776 11:43:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.776 11:43:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.776 11:43:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.776 11:43:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.776 11:43:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.776 11:43:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.776 11:43:26 -- accel/accel.sh@41 -- # jq -r . 00:07:36.035 -x option must be non-negative. 00:07:36.035 [2024-04-18 11:43:26.357747] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:36.035 accel_perf options: 00:07:36.035 [-h help message] 00:07:36.035 [-q queue depth per core] 00:07:36.035 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:36.035 [-T number of threads per core 00:07:36.035 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:36.035 [-t time in seconds] 00:07:36.035 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:36.035 [ dif_verify, , dif_generate, dif_generate_copy 00:07:36.035 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:36.035 [-l for compress/decompress workloads, name of uncompressed input file 00:07:36.035 [-S for crc32c workload, use this seed value (default 0) 00:07:36.035 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:36.035 [-f for fill workload, use this BYTE value (default 255) 00:07:36.035 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:36.035 [-y verify result if this switch is on] 00:07:36.035 [-a tasks to allocate per core (default: same value as -q)] 00:07:36.035 Can be used to spread operations across a wider range of memory. 
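The options usage printed above (once for each rejected run) documents the accel_perf knobs these scripts drive. A hedged example of a valid standalone invocation combining a few of them, assuming the same binary path as the trace:

    # 64 outstanding operations per core, 4 KiB transfers, 5-second copy run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -q 64 -o 4096 -t 5 -w copy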
00:07:36.035 11:43:26 -- common/autotest_common.sh@641 -- # es=1 00:07:36.035 11:43:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:36.035 11:43:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:36.035 11:43:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:36.035 00:07:36.035 real 0m0.075s 00:07:36.035 user 0m0.072s 00:07:36.035 sys 0m0.040s 00:07:36.035 11:43:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.035 11:43:26 -- common/autotest_common.sh@10 -- # set +x 00:07:36.035 ************************************ 00:07:36.035 END TEST accel_negative_buffers 00:07:36.035 ************************************ 00:07:36.035 11:43:26 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:36.035 11:43:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:36.035 11:43:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.035 11:43:26 -- common/autotest_common.sh@10 -- # set +x 00:07:36.035 ************************************ 00:07:36.035 START TEST accel_crc32c 00:07:36.035 ************************************ 00:07:36.035 11:43:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:36.035 11:43:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.035 11:43:26 -- accel/accel.sh@17 -- # local accel_module 00:07:36.035 11:43:26 -- accel/accel.sh@19 -- # IFS=: 00:07:36.035 11:43:26 -- accel/accel.sh@19 -- # read -r var val 00:07:36.035 11:43:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:36.035 11:43:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:36.035 11:43:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.035 11:43:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.293 11:43:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.293 11:43:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.293 11:43:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.294 11:43:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.294 11:43:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.294 11:43:26 -- accel/accel.sh@41 -- # jq -r . 00:07:36.294 [2024-04-18 11:43:26.628391] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
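Per the help text above, -S sets the crc32c seed (default 0), so the accel_crc32c case just started runs with seed 32, while the later C2 variant keeps the default. A hedged variant of the same run without an explicit seed:

    # same crc32c + verify run, default seed of 0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y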
00:07:36.294 [2024-04-18 11:43:26.628479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319508 ] 00:07:36.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.294 [2024-04-18 11:43:26.757175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.552 [2024-04-18 11:43:26.972136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=0x1 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=crc32c 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=32 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=software 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@22 -- # accel_module=software 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=32 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=32 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- 
accel/accel.sh@20 -- # val=1 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val=Yes 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:36.811 11:43:27 -- accel/accel.sh@20 -- # val= 00:07:36.811 11:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # IFS=: 00:07:36.811 11:43:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:38.715 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:38.715 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:38.715 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:38.715 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:38.715 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:38.715 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.715 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.715 11:43:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.715 11:43:29 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:38.715 11:43:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.715 00:07:38.715 real 0m2.549s 00:07:38.715 user 0m2.338s 00:07:38.715 sys 0m0.224s 00:07:38.715 11:43:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.715 11:43:29 -- common/autotest_common.sh@10 -- # set +x 00:07:38.715 ************************************ 00:07:38.715 END TEST accel_crc32c 00:07:38.715 ************************************ 00:07:38.715 11:43:29 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:38.715 11:43:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:38.715 11:43:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.715 11:43:29 -- common/autotest_common.sh@10 -- # set +x 00:07:38.974 ************************************ 00:07:38.974 START TEST 
accel_crc32c_C2 00:07:38.974 ************************************ 00:07:38.974 11:43:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:38.974 11:43:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.974 11:43:29 -- accel/accel.sh@17 -- # local accel_module 00:07:38.974 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:38.974 11:43:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:38.974 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:38.974 11:43:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:38.974 11:43:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.974 11:43:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.974 11:43:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.974 11:43:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.974 11:43:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.974 11:43:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.974 11:43:29 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.974 11:43:29 -- accel/accel.sh@41 -- # jq -r . 00:07:38.974 [2024-04-18 11:43:29.369675] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:38.974 [2024-04-18 11:43:29.369756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320060 ] 00:07:38.974 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.974 [2024-04-18 11:43:29.492505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.232 [2024-04-18 11:43:29.689766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=0x1 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=crc32c 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=0 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=software 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@22 -- # accel_module=software 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=32 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=32 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=1 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val=Yes 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:39.491 11:43:29 -- accel/accel.sh@20 -- # val= 00:07:39.491 11:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # IFS=: 00:07:39.491 11:43:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@20 -- # val= 00:07:41.392 11:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # IFS=: 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@20 -- # val= 00:07:41.392 11:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # IFS=: 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@20 -- # val= 00:07:41.392 11:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # IFS=: 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@20 -- # val= 00:07:41.392 11:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # IFS=: 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@20 -- # val= 00:07:41.392 11:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # IFS=: 00:07:41.392 11:43:31 -- 
accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@20 -- # val= 00:07:41.392 11:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # IFS=: 00:07:41.392 11:43:31 -- accel/accel.sh@19 -- # read -r var val 00:07:41.392 11:43:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.392 11:43:31 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:41.392 11:43:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.392 00:07:41.392 real 0m2.550s 00:07:41.392 user 0m2.345s 00:07:41.392 sys 0m0.219s 00:07:41.392 11:43:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.392 11:43:31 -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 ************************************ 00:07:41.392 END TEST accel_crc32c_C2 00:07:41.392 ************************************ 00:07:41.392 11:43:31 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:41.393 11:43:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:41.393 11:43:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.393 11:43:31 -- common/autotest_common.sh@10 -- # set +x 00:07:41.651 ************************************ 00:07:41.651 START TEST accel_copy 00:07:41.651 ************************************ 00:07:41.651 11:43:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:41.651 11:43:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.651 11:43:32 -- accel/accel.sh@17 -- # local accel_module 00:07:41.651 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:41.651 11:43:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:41.651 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:41.651 11:43:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:41.651 11:43:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.651 11:43:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.651 11:43:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.651 11:43:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.651 11:43:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.651 11:43:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.651 11:43:32 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.651 11:43:32 -- accel/accel.sh@41 -- # jq -r . 00:07:41.651 [2024-04-18 11:43:32.119228] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
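The C2 case that just finished adds -C 2, which per the help text configures the io vector size to test, so the crc32c is computed over a two-element vector rather than a single flat buffer. A hedged variant widening the vector further, same binary path assumed:

    # hypothetical four-element io vector for the same crc32c + verify run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 4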
00:07:41.651 [2024-04-18 11:43:32.119308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320615 ] 00:07:41.651 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.910 [2024-04-18 11:43:32.243121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.910 [2024-04-18 11:43:32.444793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=0x1 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=copy 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=software 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=32 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=32 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=1 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val=Yes 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:42.169 11:43:32 -- accel/accel.sh@20 -- # val= 00:07:42.169 11:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # IFS=: 00:07:42.169 11:43:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@20 -- # val= 00:07:44.073 11:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@20 -- # val= 00:07:44.073 11:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@20 -- # val= 00:07:44.073 11:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@20 -- # val= 00:07:44.073 11:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@20 -- # val= 00:07:44.073 11:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@20 -- # val= 00:07:44.073 11:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.073 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.073 11:43:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.073 11:43:34 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:44.073 11:43:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.073 00:07:44.073 real 0m2.525s 00:07:44.073 user 0m2.323s 00:07:44.073 sys 0m0.214s 00:07:44.073 11:43:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.074 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:07:44.074 ************************************ 00:07:44.074 END TEST accel_copy 00:07:44.074 ************************************ 00:07:44.331 11:43:34 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:44.332 11:43:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:44.332 11:43:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.332 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 ************************************ 00:07:44.332 START TEST accel_fill 00:07:44.332 ************************************ 00:07:44.332 11:43:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:44.332 11:43:34 -- accel/accel.sh@16 -- # local accel_opc 
00:07:44.332 11:43:34 -- accel/accel.sh@17 -- # local accel_module 00:07:44.332 11:43:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:44.332 11:43:34 -- accel/accel.sh@19 -- # IFS=: 00:07:44.332 11:43:34 -- accel/accel.sh@19 -- # read -r var val 00:07:44.332 11:43:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:44.332 11:43:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.332 11:43:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.332 11:43:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.332 11:43:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.332 11:43:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.332 11:43:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.332 11:43:34 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.332 11:43:34 -- accel/accel.sh@41 -- # jq -r . 00:07:44.332 [2024-04-18 11:43:34.818907] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:44.332 [2024-04-18 11:43:34.818982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321164 ] 00:07:44.589 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.589 [2024-04-18 11:43:34.937332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.589 [2024-04-18 11:43:35.130384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=0x1 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=fill 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=0x80 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 
-- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=software 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=64 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=64 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=1 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val=Yes 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.853 11:43:35 -- accel/accel.sh@20 -- # val= 00:07:44.853 11:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # IFS=: 00:07:44.853 11:43:35 -- accel/accel.sh@19 -- # read -r var val 00:07:46.807 11:43:37 -- accel/accel.sh@20 -- # val= 00:07:46.808 11:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # IFS=: 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:46.808 11:43:37 -- accel/accel.sh@20 -- # val= 00:07:46.808 11:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # IFS=: 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:46.808 11:43:37 -- accel/accel.sh@20 -- # val= 00:07:46.808 11:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # IFS=: 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:46.808 11:43:37 -- accel/accel.sh@20 -- # val= 00:07:46.808 11:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # IFS=: 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:46.808 11:43:37 -- accel/accel.sh@20 -- # val= 00:07:46.808 11:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # IFS=: 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:46.808 11:43:37 -- accel/accel.sh@20 -- # val= 00:07:46.808 11:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.808 11:43:37 -- accel/accel.sh@19 
-- # IFS=: 00:07:46.808 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:46.808 11:43:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.808 11:43:37 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:46.808 11:43:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.808 00:07:46.808 real 0m2.476s 00:07:46.808 user 0m2.267s 00:07:46.808 sys 0m0.222s 00:07:46.808 11:43:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.808 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:07:46.808 ************************************ 00:07:46.808 END TEST accel_fill 00:07:46.808 ************************************ 00:07:46.808 11:43:37 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:46.808 11:43:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:46.808 11:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.808 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:07:47.066 ************************************ 00:07:47.066 START TEST accel_copy_crc32c 00:07:47.066 ************************************ 00:07:47.066 11:43:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:47.066 11:43:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.066 11:43:37 -- accel/accel.sh@17 -- # local accel_module 00:07:47.066 11:43:37 -- accel/accel.sh@19 -- # IFS=: 00:07:47.066 11:43:37 -- accel/accel.sh@19 -- # read -r var val 00:07:47.066 11:43:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:47.066 11:43:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:47.066 11:43:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.066 11:43:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.066 11:43:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.066 11:43:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.066 11:43:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.066 11:43:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.066 11:43:37 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.066 11:43:37 -- accel/accel.sh@41 -- # jq -r . 00:07:47.066 [2024-04-18 11:43:37.510697] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
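The fill case above pins its pattern and queue shape explicitly: -f 128 selects the fill byte, -q 64 the queue depth, and -a 64 the tasks allocated per core (the same as -q, which is also the documented default). A hedged variant with a different byte value and a smaller queue:

    # fill with byte 0xAA (170) at queue depth 32, verifying the result
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 170 -q 32 -y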
00:07:47.066 [2024-04-18 11:43:37.510776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321505 ] 00:07:47.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.323 [2024-04-18 11:43:37.635197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.323 [2024-04-18 11:43:37.840989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.580 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.580 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.580 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.580 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.580 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.580 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.580 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=0x1 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=0 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=software 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@22 -- # accel_module=software 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=32 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 
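copy_crc32c combines a buffer copy with a crc32c computed over the copied data, which is why the setup above allocates two 4096-byte buffers (source and destination) rather than one. A hedged invocation matching the test flow, same binary path assumed:

    # combined copy + crc32c with result verification
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y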
00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=32 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=1 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val=Yes 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:47.581 11:43:38 -- accel/accel.sh@20 -- # val= 00:07:47.581 11:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # IFS=: 00:07:47.581 11:43:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:49.481 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:49.481 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:49.481 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:49.481 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:49.481 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:49.481 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.481 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.481 11:43:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.481 11:43:40 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.481 11:43:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.481 00:07:49.481 real 0m2.553s 00:07:49.481 user 0m2.349s 00:07:49.481 sys 0m0.219s 00:07:49.481 11:43:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.481 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.481 ************************************ 00:07:49.481 END TEST accel_copy_crc32c 00:07:49.481 ************************************ 00:07:49.739 11:43:40 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.739 
11:43:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:49.739 11:43:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.739 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.739 ************************************ 00:07:49.739 START TEST accel_copy_crc32c_C2 00:07:49.739 ************************************ 00:07:49.739 11:43:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.739 11:43:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.739 11:43:40 -- accel/accel.sh@17 -- # local accel_module 00:07:49.739 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:49.739 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:49.739 11:43:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:49.739 11:43:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:49.739 11:43:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.739 11:43:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.739 11:43:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.739 11:43:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.739 11:43:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.739 11:43:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.739 11:43:40 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.739 11:43:40 -- accel/accel.sh@41 -- # jq -r . 00:07:49.739 [2024-04-18 11:43:40.260899] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:49.739 [2024-04-18 11:43:40.260981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322033 ] 00:07:49.998 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.998 [2024-04-18 11:43:40.387338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.255 [2024-04-18 11:43:40.600692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=0x1 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 
11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=0 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=software 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@22 -- # accel_module=software 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=32 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=32 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=1 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val=Yes 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:50.513 11:43:40 -- accel/accel.sh@20 -- # val= 00:07:50.513 11:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # IFS=: 00:07:50.513 11:43:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@20 -- # val= 00:07:52.413 11:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@20 -- # val= 00:07:52.413 11:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@20 -- # val= 00:07:52.413 11:43:42 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@20 -- # val= 00:07:52.413 11:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@20 -- # val= 00:07:52.413 11:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@20 -- # val= 00:07:52.413 11:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.413 11:43:42 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:52.413 11:43:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.413 00:07:52.413 real 0m2.539s 00:07:52.413 user 0m2.336s 00:07:52.413 sys 0m0.215s 00:07:52.413 11:43:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.413 11:43:42 -- common/autotest_common.sh@10 -- # set +x 00:07:52.413 ************************************ 00:07:52.413 END TEST accel_copy_crc32c_C2 00:07:52.413 ************************************ 00:07:52.413 11:43:42 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:52.413 11:43:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:52.413 11:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.413 11:43:42 -- common/autotest_common.sh@10 -- # set +x 00:07:52.413 ************************************ 00:07:52.413 START TEST accel_dualcast 00:07:52.413 ************************************ 00:07:52.413 11:43:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:52.413 11:43:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.413 11:43:42 -- accel/accel.sh@17 -- # local accel_module 00:07:52.413 11:43:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # IFS=: 00:07:52.413 11:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:52.413 11:43:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:52.413 11:43:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.413 11:43:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.413 11:43:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.413 11:43:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.413 11:43:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.413 11:43:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.413 11:43:42 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.413 11:43:42 -- accel/accel.sh@41 -- # jq -r . 00:07:52.671 [2024-04-18 11:43:42.987123] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:07:52.671 [2024-04-18 11:43:42.987205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322591 ] 00:07:52.671 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.671 [2024-04-18 11:43:43.104688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.929 [2024-04-18 11:43:43.305287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=0x1 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=dualcast 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=software 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@22 -- # accel_module=software 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=32 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=32 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=1 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 
-- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val=Yes 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:53.187 11:43:43 -- accel/accel.sh@20 -- # val= 00:07:53.187 11:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:53.187 11:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@20 -- # val= 00:07:55.088 11:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@20 -- # val= 00:07:55.088 11:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@20 -- # val= 00:07:55.088 11:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@20 -- # val= 00:07:55.088 11:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@20 -- # val= 00:07:55.088 11:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@20 -- # val= 00:07:55.088 11:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.088 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.088 11:43:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.088 11:43:45 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:55.088 11:43:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.088 00:07:55.088 real 0m2.528s 00:07:55.088 user 0m2.345s 00:07:55.088 sys 0m0.197s 00:07:55.088 11:43:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.088 11:43:45 -- common/autotest_common.sh@10 -- # set +x 00:07:55.088 ************************************ 00:07:55.088 END TEST accel_dualcast 00:07:55.088 ************************************ 00:07:55.088 11:43:45 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:55.088 11:43:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:55.088 11:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.088 11:43:45 -- common/autotest_common.sh@10 -- # set +x 00:07:55.346 ************************************ 00:07:55.346 START TEST accel_compare 00:07:55.346 ************************************ 00:07:55.346 11:43:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:55.346 11:43:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.346 11:43:45 
-- accel/accel.sh@17 -- # local accel_module 00:07:55.346 11:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:55.346 11:43:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:55.346 11:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:55.346 11:43:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:55.346 11:43:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.346 11:43:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.346 11:43:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.346 11:43:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.346 11:43:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.346 11:43:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.346 11:43:45 -- accel/accel.sh@40 -- # local IFS=, 00:07:55.346 11:43:45 -- accel/accel.sh@41 -- # jq -r . 00:07:55.346 [2024-04-18 11:43:45.721229] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:07:55.346 [2024-04-18 11:43:45.721305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323132 ] 00:07:55.346 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.346 [2024-04-18 11:43:45.844629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.605 [2024-04-18 11:43:46.054983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val=0x1 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val=compare 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- 
accel/accel.sh@20 -- # val=software 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@22 -- # accel_module=software 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val=32 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val=32 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val=1 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val=Yes 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:55.864 11:43:46 -- accel/accel.sh@20 -- # val= 00:07:55.864 11:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:55.864 11:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@20 -- # val= 00:07:57.766 11:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@20 -- # val= 00:07:57.766 11:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@20 -- # val= 00:07:57.766 11:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@20 -- # val= 00:07:57.766 11:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@20 -- # val= 00:07:57.766 11:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@20 -- # val= 00:07:57.766 11:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:57.766 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:57.766 11:43:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.766 11:43:48 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:57.766 11:43:48 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:57.766 00:07:57.766 real 0m2.548s 00:07:57.766 user 0m2.367s 00:07:57.766 sys 0m0.196s 00:07:57.766 11:43:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.766 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:07:57.766 ************************************ 00:07:57.766 END TEST accel_compare 00:07:57.766 ************************************ 00:07:57.766 11:43:48 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:57.766 11:43:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:57.766 11:43:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.766 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:07:58.025 ************************************ 00:07:58.025 START TEST accel_xor 00:07:58.025 ************************************ 00:07:58.025 11:43:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:58.025 11:43:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.025 11:43:48 -- accel/accel.sh@17 -- # local accel_module 00:07:58.025 11:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:58.025 11:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:58.025 11:43:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:58.025 11:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:58.025 11:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.025 11:43:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.025 11:43:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.025 11:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.025 11:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.025 11:43:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.025 11:43:48 -- accel/accel.sh@40 -- # local IFS=, 00:07:58.025 11:43:48 -- accel/accel.sh@41 -- # jq -r . 00:07:58.025 [2024-04-18 11:43:48.462398] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:07:58.025 [2024-04-18 11:43:48.462480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323507 ] 00:07:58.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.283 [2024-04-18 11:43:48.583336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.283 [2024-04-18 11:43:48.785251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.540 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.540 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.540 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.540 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.540 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.540 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.540 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.540 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.540 11:43:49 -- accel/accel.sh@20 -- # val=0x1 00:07:58.540 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.540 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.540 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.540 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.540 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val=xor 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val=2 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val=software 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@22 -- # accel_module=software 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val=32 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val=32 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- 
accel/accel.sh@20 -- # val=1 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val=Yes 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:58.541 11:43:49 -- accel/accel.sh@20 -- # val= 00:07:58.541 11:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:58.541 11:43:49 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@20 -- # val= 00:08:00.441 11:43:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # IFS=: 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@20 -- # val= 00:08:00.441 11:43:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # IFS=: 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@20 -- # val= 00:08:00.441 11:43:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # IFS=: 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@20 -- # val= 00:08:00.441 11:43:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # IFS=: 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@20 -- # val= 00:08:00.441 11:43:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # IFS=: 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@20 -- # val= 00:08:00.441 11:43:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # IFS=: 00:08:00.441 11:43:50 -- accel/accel.sh@19 -- # read -r var val 00:08:00.441 11:43:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.441 11:43:50 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:00.441 11:43:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.441 00:08:00.441 real 0m2.557s 00:08:00.441 user 0m2.368s 00:08:00.441 sys 0m0.203s 00:08:00.441 11:43:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.441 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:08:00.441 ************************************ 00:08:00.441 END TEST accel_xor 00:08:00.441 ************************************ 00:08:00.699 11:43:51 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:00.699 11:43:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:00.699 11:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.700 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:08:00.700 ************************************ 00:08:00.700 START TEST accel_xor 
00:08:00.700 ************************************ 00:08:00.700 11:43:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:08:00.700 11:43:51 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.700 11:43:51 -- accel/accel.sh@17 -- # local accel_module 00:08:00.700 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:00.700 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:00.700 11:43:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:00.700 11:43:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:00.700 11:43:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.700 11:43:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.700 11:43:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.700 11:43:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.700 11:43:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.700 11:43:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.700 11:43:51 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.700 11:43:51 -- accel/accel.sh@41 -- # jq -r . 00:08:00.700 [2024-04-18 11:43:51.200178] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:08:00.700 [2024-04-18 11:43:51.200252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324009 ] 00:08:00.958 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.958 [2024-04-18 11:43:51.318765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.217 [2024-04-18 11:43:51.535807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val=0x1 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val=xor 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val=3 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.217 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.217 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.217 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val=software 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@22 -- # accel_module=software 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val=32 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val=32 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val=1 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val=Yes 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 11:43:51 -- accel/accel.sh@20 -- # val= 00:08:01.475 11:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 11:43:51 -- accel/accel.sh@19 -- # read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@20 -- # val= 00:08:03.375 11:43:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@20 -- # val= 00:08:03.375 11:43:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@20 -- # val= 00:08:03.375 11:43:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@20 -- # val= 00:08:03.375 11:43:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@20 -- # val= 00:08:03.375 11:43:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # 
read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@20 -- # val= 00:08:03.375 11:43:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.375 11:43:53 -- accel/accel.sh@19 -- # read -r var val 00:08:03.375 11:43:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.375 11:43:53 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:03.375 11:43:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.375 00:08:03.375 real 0m2.554s 00:08:03.375 user 0m2.366s 00:08:03.376 sys 0m0.198s 00:08:03.376 11:43:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.376 11:43:53 -- common/autotest_common.sh@10 -- # set +x 00:08:03.376 ************************************ 00:08:03.376 END TEST accel_xor 00:08:03.376 ************************************ 00:08:03.376 11:43:53 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:03.376 11:43:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:03.376 11:43:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.376 11:43:53 -- common/autotest_common.sh@10 -- # set +x 00:08:03.376 ************************************ 00:08:03.376 START TEST accel_dif_verify 00:08:03.376 ************************************ 00:08:03.376 11:43:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:08:03.376 11:43:53 -- accel/accel.sh@16 -- # local accel_opc 00:08:03.376 11:43:53 -- accel/accel.sh@17 -- # local accel_module 00:08:03.376 11:43:53 -- accel/accel.sh@19 -- # IFS=: 00:08:03.376 11:43:53 -- accel/accel.sh@19 -- # read -r var val 00:08:03.376 11:43:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:03.376 11:43:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:03.376 11:43:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.376 11:43:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.376 11:43:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.376 11:43:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.376 11:43:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.376 11:43:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.376 11:43:53 -- accel/accel.sh@40 -- # local IFS=, 00:08:03.376 11:43:53 -- accel/accel.sh@41 -- # jq -r . 00:08:03.690 [2024-04-18 11:43:53.938887] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
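The second xor pass above differs from the first only in its -x 3 argument; judging by the val=2 and val=3 lines in the two traces, this raises the number of source buffers fed to the xor operation from two to three. A hedged sketch of that variant next to the dif_verify case whose setup begins above and continues below (flags taken from the run_test lines in this log, everything else assumed):

  # xor across three source buffers, then the DIF-verify workload, one second each.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_verify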
00:08:03.690 [2024-04-18 11:43:53.938962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324554 ] 00:08:03.690 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.690 [2024-04-18 11:43:54.058426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.948 [2024-04-18 11:43:54.264950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.948 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:03.948 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.948 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:03.948 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:03.948 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:03.949 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:03.949 11:43:54 -- accel/accel.sh@20 -- # val=0x1 00:08:03.949 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:03.949 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:03.949 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:03.949 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:03.949 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:03.949 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:03.949 11:43:54 -- accel/accel.sh@20 -- # val=dif_verify 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val=software 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@22 -- # accel_module=software 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r 
var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val=32 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val=32 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val=1 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val=No 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.207 11:43:54 -- accel/accel.sh@20 -- # val= 00:08:04.207 11:43:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # IFS=: 00:08:04.207 11:43:54 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@20 -- # val= 00:08:06.105 11:43:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@20 -- # val= 00:08:06.105 11:43:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@20 -- # val= 00:08:06.105 11:43:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@20 -- # val= 00:08:06.105 11:43:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@20 -- # val= 00:08:06.105 11:43:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@20 -- # val= 00:08:06.105 11:43:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.105 11:43:56 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:06.105 11:43:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.105 00:08:06.105 real 0m2.526s 00:08:06.105 user 0m2.324s 00:08:06.105 sys 0m0.217s 00:08:06.105 11:43:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.105 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:08:06.105 
************************************ 00:08:06.105 END TEST accel_dif_verify 00:08:06.105 ************************************ 00:08:06.105 11:43:56 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:06.105 11:43:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:06.105 11:43:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.105 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:08:06.105 ************************************ 00:08:06.105 START TEST accel_dif_generate 00:08:06.105 ************************************ 00:08:06.105 11:43:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:08:06.105 11:43:56 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.105 11:43:56 -- accel/accel.sh@17 -- # local accel_module 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # IFS=: 00:08:06.105 11:43:56 -- accel/accel.sh@19 -- # read -r var val 00:08:06.105 11:43:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:06.105 11:43:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:06.105 11:43:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.105 11:43:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.105 11:43:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.105 11:43:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.105 11:43:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.105 11:43:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.105 11:43:56 -- accel/accel.sh@40 -- # local IFS=, 00:08:06.105 11:43:56 -- accel/accel.sh@41 -- # jq -r . 00:08:06.363 [2024-04-18 11:43:56.655997] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
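Unlike the copy and xor cases, the DIF workloads in this block carry extra buffer sizes in their setup trace (the '4096 bytes', '512 bytes' and '8 bytes' values above) and appear to run with verification disabled (val=No rather than val=Yes), presumably because generating or checking the DIF metadata is itself the operation being exercised. A minimal sketch of the dif_generate case that continues below, again only an approximation of the harness call:

  # Exercise the dif_generate workload for 1 second on the software module.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate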
00:08:06.363 [2024-04-18 11:43:56.656071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325114 ] 00:08:06.363 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.363 [2024-04-18 11:43:56.773650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.621 [2024-04-18 11:43:56.973245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=0x1 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=dif_generate 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=software 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@22 -- # accel_module=software 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read 
-r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=32 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=32 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=1 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.878 11:43:57 -- accel/accel.sh@20 -- # val=No 00:08:06.878 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.878 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.879 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.879 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.879 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.879 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:06.879 11:43:57 -- accel/accel.sh@20 -- # val= 00:08:06.879 11:43:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.879 11:43:57 -- accel/accel.sh@19 -- # IFS=: 00:08:06.879 11:43:57 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:08.777 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:08.777 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:08.777 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:08.777 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:08.777 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:08.777 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.777 11:43:59 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:08.777 11:43:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.777 00:08:08.777 real 0m2.519s 00:08:08.777 user 0m2.312s 00:08:08.777 sys 0m0.223s 00:08:08.777 11:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.777 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:08:08.777 
************************************ 00:08:08.777 END TEST accel_dif_generate 00:08:08.777 ************************************ 00:08:08.777 11:43:59 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:08.777 11:43:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:08.777 11:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.777 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:08:08.777 ************************************ 00:08:08.777 START TEST accel_dif_generate_copy 00:08:08.777 ************************************ 00:08:08.777 11:43:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:08.777 11:43:59 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.777 11:43:59 -- accel/accel.sh@17 -- # local accel_module 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:08.777 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:08.777 11:43:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:08.777 11:43:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:08.777 11:43:59 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.777 11:43:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.777 11:43:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.777 11:43:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.777 11:43:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.777 11:43:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.777 11:43:59 -- accel/accel.sh@40 -- # local IFS=, 00:08:08.777 11:43:59 -- accel/accel.sh@41 -- # jq -r . 00:08:09.036 [2024-04-18 11:43:59.364546] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
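Each case in this block finishes with the same three assertions, visible just above: the module reported by accel_perf is non-empty, the opcode is non-empty, and the module equals "software" (the \s\o\f\t\w\a\r\e form is simply how bash xtrace escapes the literal right-hand side of a [[ == ]] comparison). Written out with the accel_module and accel_opc variables the trace shows being set, the check amounts to roughly:

  # Pass only if the software module executed the opcode under test.
  [[ -n $accel_module ]]
  [[ -n $accel_opc ]]
  [[ $accel_module == "software" ]]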
00:08:09.036 [2024-04-18 11:43:59.364619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325509 ] 00:08:09.036 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.036 [2024-04-18 11:43:59.486111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.294 [2024-04-18 11:43:59.691091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=0x1 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=software 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@22 -- # accel_module=software 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=32 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=32 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r 
var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=1 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val=No 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:09.553 11:43:59 -- accel/accel.sh@20 -- # val= 00:08:09.553 11:43:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # IFS=: 00:08:09.553 11:43:59 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@20 -- # val= 00:08:11.454 11:44:01 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # IFS=: 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@20 -- # val= 00:08:11.454 11:44:01 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # IFS=: 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@20 -- # val= 00:08:11.454 11:44:01 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # IFS=: 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@20 -- # val= 00:08:11.454 11:44:01 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # IFS=: 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@20 -- # val= 00:08:11.454 11:44:01 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # IFS=: 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@20 -- # val= 00:08:11.454 11:44:01 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # IFS=: 00:08:11.454 11:44:01 -- accel/accel.sh@19 -- # read -r var val 00:08:11.454 11:44:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.454 11:44:01 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:11.454 11:44:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.454 00:08:11.454 real 0m2.541s 00:08:11.454 user 0m2.345s 00:08:11.454 sys 0m0.211s 00:08:11.454 11:44:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.454 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:08:11.454 ************************************ 00:08:11.454 END TEST accel_dif_generate_copy 00:08:11.454 ************************************ 00:08:11.454 11:44:01 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:11.454 11:44:01 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.454 11:44:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:11.454 11:44:01 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.454 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:08:11.712 ************************************ 00:08:11.712 START TEST accel_comp 00:08:11.712 ************************************ 00:08:11.712 11:44:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.712 11:44:02 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.712 11:44:02 -- accel/accel.sh@17 -- # local accel_module 00:08:11.713 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:11.713 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:11.713 11:44:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.713 11:44:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.713 11:44:02 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.713 11:44:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.713 11:44:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.713 11:44:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.713 11:44:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.713 11:44:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.713 11:44:02 -- accel/accel.sh@40 -- # local IFS=, 00:08:11.713 11:44:02 -- accel/accel.sh@41 -- # jq -r . 00:08:11.713 [2024-04-18 11:44:02.094130] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:08:11.713 [2024-04-18 11:44:02.094220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325975 ] 00:08:11.713 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.713 [2024-04-18 11:44:02.215515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.972 [2024-04-18 11:44:02.424415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=0x1 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 
-- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=compress 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@23 -- # accel_opc=compress 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=software 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@22 -- # accel_module=software 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=32 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=32 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=1 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val=No 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:12.231 11:44:02 -- accel/accel.sh@20 -- # val= 00:08:12.231 11:44:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # IFS=: 00:08:12.231 11:44:02 -- accel/accel.sh@19 -- # read -r var val 00:08:14.140 11:44:04 -- accel/accel.sh@20 -- # val= 00:08:14.140 11:44:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # read -r var val 00:08:14.140 11:44:04 -- accel/accel.sh@20 -- # val= 00:08:14.140 11:44:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # read 
-r var val 00:08:14.140 11:44:04 -- accel/accel.sh@20 -- # val= 00:08:14.140 11:44:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # read -r var val 00:08:14.140 11:44:04 -- accel/accel.sh@20 -- # val= 00:08:14.140 11:44:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # read -r var val 00:08:14.140 11:44:04 -- accel/accel.sh@20 -- # val= 00:08:14.140 11:44:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # read -r var val 00:08:14.140 11:44:04 -- accel/accel.sh@20 -- # val= 00:08:14.140 11:44:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.140 11:44:04 -- accel/accel.sh@19 -- # read -r var val 00:08:14.140 11:44:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.140 11:44:04 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:14.140 11:44:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.140 00:08:14.140 real 0m2.558s 00:08:14.140 user 0m2.354s 00:08:14.140 sys 0m0.218s 00:08:14.140 11:44:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.140 11:44:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 ************************************ 00:08:14.140 END TEST accel_comp 00:08:14.140 ************************************ 00:08:14.140 11:44:04 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.140 11:44:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:14.140 11:44:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.140 11:44:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.399 ************************************ 00:08:14.399 START TEST accel_decomp 00:08:14.399 ************************************ 00:08:14.399 11:44:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.399 11:44:04 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.399 11:44:04 -- accel/accel.sh@17 -- # local accel_module 00:08:14.399 11:44:04 -- accel/accel.sh@19 -- # IFS=: 00:08:14.399 11:44:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.399 11:44:04 -- accel/accel.sh@19 -- # read -r var val 00:08:14.399 11:44:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.399 11:44:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.399 11:44:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.399 11:44:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.399 11:44:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.399 11:44:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.399 11:44:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.399 11:44:04 -- accel/accel.sh@40 -- # local IFS=, 00:08:14.399 11:44:04 -- accel/accel.sh@41 -- # jq -r . 00:08:14.399 [2024-04-18 11:44:04.844364] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
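
Note (illustrative annotation, not part of the captured console output): accel_comp and accel_decomp reuse the same harness but point accel_perf at a sample data file with -l .../spdk/test/accel/bib. The decompress run additionally passes -y, and in the dumped configurations the Yes/No field that reads No for dif_generate_copy and compress reads Yes for the decompress runs, which suggests it is the verify switch (an inference from the dumps, not from accel_perf documentation). A hedged sketch of the pair of invocations:

#!/usr/bin/env bash
# Sketch only -- flags copied from the accel_perf command lines in the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BIB=$SPDK/test/accel/bib    # sample input file used by the accel tests
# Compress the sample file for 1 second ...
"$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$BIB"
# ... then decompress it; -y appears to enable verification of the output
# (the dumped config shows Yes instead of No when it is passed).
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y
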
00:08:14.399 [2024-04-18 11:44:04.844441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326532 ] 00:08:14.399 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.658 [2024-04-18 11:44:04.969362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.658 [2024-04-18 11:44:05.176869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=0x1 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=decompress 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=software 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@22 -- # accel_module=software 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=32 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 
-- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=32 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=1 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val=Yes 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:14.918 11:44:05 -- accel/accel.sh@20 -- # val= 00:08:14.918 11:44:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # IFS=: 00:08:14.918 11:44:05 -- accel/accel.sh@19 -- # read -r var val 00:08:16.818 11:44:07 -- accel/accel.sh@20 -- # val= 00:08:16.818 11:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:16.818 11:44:07 -- accel/accel.sh@20 -- # val= 00:08:16.818 11:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:16.818 11:44:07 -- accel/accel.sh@20 -- # val= 00:08:16.818 11:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:16.818 11:44:07 -- accel/accel.sh@20 -- # val= 00:08:16.818 11:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:16.818 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:16.818 11:44:07 -- accel/accel.sh@20 -- # val= 00:08:16.819 11:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.819 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:16.819 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:16.819 11:44:07 -- accel/accel.sh@20 -- # val= 00:08:16.819 11:44:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.819 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:16.819 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:17.077 11:44:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.077 11:44:07 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:17.077 11:44:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.077 00:08:17.077 real 0m2.575s 00:08:17.077 user 0m2.377s 00:08:17.077 sys 0m0.215s 00:08:17.077 11:44:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:17.077 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:08:17.077 ************************************ 00:08:17.077 END TEST accel_decomp 00:08:17.077 ************************************ 00:08:17.077 11:44:07 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:17.077 11:44:07 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:17.077 11:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.077 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:08:17.077 ************************************ 00:08:17.077 START TEST accel_decmop_full 00:08:17.077 ************************************ 00:08:17.078 11:44:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:17.078 11:44:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:17.078 11:44:07 -- accel/accel.sh@17 -- # local accel_module 00:08:17.078 11:44:07 -- accel/accel.sh@19 -- # IFS=: 00:08:17.078 11:44:07 -- accel/accel.sh@19 -- # read -r var val 00:08:17.078 11:44:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:17.078 11:44:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:17.078 11:44:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.078 11:44:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.078 11:44:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.078 11:44:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.078 11:44:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.078 11:44:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.078 11:44:07 -- accel/accel.sh@40 -- # local IFS=, 00:08:17.078 11:44:07 -- accel/accel.sh@41 -- # jq -r . 00:08:17.078 [2024-04-18 11:44:07.605478] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
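
Note (illustrative annotation, not part of the captured console output): the "_full" variant adds -o 0 to the same decompress command line. Comparing the config dumps, the earlier runs operate on '4096 bytes' buffers while this run's dump below reports '111250 bytes', i.e. the whole bib file is handled as a single operation rather than in 4 KiB chunks; that is an inference from the dumped values, not a statement of accel_perf's documented semantics. Sketch:

#!/usr/bin/env bash
# Sketch only -- -o 0 is taken verbatim from the logged command line.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Decompress the full file per operation: with -o 0 the dumped config reports
# '111250 bytes' instead of the default '4096 bytes'.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0
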
00:08:17.078 [2024-04-18 11:44:07.605575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327085 ] 00:08:17.336 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.336 [2024-04-18 11:44:07.728606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.594 [2024-04-18 11:44:07.934629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=0x1 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=decompress 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=software 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@22 -- # accel_module=software 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=32 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 
11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=32 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=1 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val=Yes 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:17.853 11:44:08 -- accel/accel.sh@20 -- # val= 00:08:17.853 11:44:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # IFS=: 00:08:17.853 11:44:08 -- accel/accel.sh@19 -- # read -r var val 00:08:19.756 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:19.756 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.756 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.756 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:19.757 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:19.757 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:19.757 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:19.757 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:19.757 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.757 11:44:10 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.757 11:44:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.757 00:08:19.757 real 0m2.555s 00:08:19.757 user 0m2.355s 00:08:19.757 sys 0m0.214s 00:08:19.757 11:44:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:19.757 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:08:19.757 ************************************ 00:08:19.757 END TEST accel_decmop_full 00:08:19.757 ************************************ 00:08:19.757 11:44:10 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:19.757 11:44:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:19.757 11:44:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.757 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:08:19.757 ************************************ 00:08:19.757 START TEST accel_decomp_mcore 00:08:19.757 ************************************ 00:08:19.757 11:44:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:19.757 11:44:10 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.757 11:44:10 -- accel/accel.sh@17 -- # local accel_module 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:19.757 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:19.757 11:44:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:20.015 11:44:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:20.015 11:44:10 -- accel/accel.sh@12 -- # build_accel_config 00:08:20.015 11:44:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.015 11:44:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.015 11:44:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.015 11:44:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.015 11:44:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.015 11:44:10 -- accel/accel.sh@40 -- # local IFS=, 00:08:20.015 11:44:10 -- accel/accel.sh@41 -- # jq -r . 00:08:20.015 [2024-04-18 11:44:10.352929] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
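
Note (illustrative annotation, not part of the captured console output): accel_decomp_mcore adds -m 0xf, and the effect is visible directly in the startup lines that follow: DPDK EAL is launched with -c 0xf, four reactors come up on cores 0-3, and the test summary later reports far more user CPU time (0m7.852s) than wall time (0m2.630s), consistent with several cores driving the workload concurrently. Sketch of the multi-core invocation:

#!/usr/bin/env bash
# Sketch only -- the core mask matches the logged run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -m 0xf is the standard SPDK application core mask: run reactors on cores 0-3.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf
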
00:08:20.015 [2024-04-18 11:44:10.353002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327535 ] 00:08:20.015 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.015 [2024-04-18 11:44:10.477962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.273 [2024-04-18 11:44:10.699434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.273 [2024-04-18 11:44:10.699508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.273 [2024-04-18 11:44:10.699542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.273 [2024-04-18 11:44:10.699546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=0xf 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=decompress 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=software 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@22 -- # accel_module=software 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case 
"$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=32 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=32 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=1 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val=Yes 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:20.531 11:44:10 -- accel/accel.sh@20 -- # val= 00:08:20.531 11:44:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # IFS=: 00:08:20.531 11:44:10 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 
11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@20 -- # val= 00:08:22.498 11:44:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # IFS=: 00:08:22.498 11:44:12 -- accel/accel.sh@19 -- # read -r var val 00:08:22.498 11:44:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.498 11:44:12 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:22.498 11:44:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.498 00:08:22.498 real 0m2.630s 00:08:22.498 user 0m7.852s 00:08:22.498 sys 0m0.246s 00:08:22.498 11:44:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:22.498 11:44:12 -- common/autotest_common.sh@10 -- # set +x 00:08:22.498 ************************************ 00:08:22.498 END TEST accel_decomp_mcore 00:08:22.498 ************************************ 00:08:22.498 11:44:12 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.498 11:44:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:22.498 11:44:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.498 11:44:12 -- common/autotest_common.sh@10 -- # set +x 00:08:22.763 ************************************ 00:08:22.763 START TEST accel_decomp_full_mcore 00:08:22.763 ************************************ 00:08:22.763 11:44:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.763 11:44:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.763 11:44:13 -- accel/accel.sh@17 -- # local accel_module 00:08:22.764 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:22.764 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:22.764 11:44:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.764 11:44:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.764 11:44:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.764 11:44:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.764 11:44:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.764 11:44:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.764 11:44:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.764 11:44:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.764 11:44:13 -- accel/accel.sh@40 -- # local IFS=, 00:08:22.764 11:44:13 -- accel/accel.sh@41 -- # jq -r . 00:08:22.764 [2024-04-18 11:44:13.166138] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:08:22.764 [2024-04-18 11:44:13.166227] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327961 ] 00:08:22.764 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.764 [2024-04-18 11:44:13.286760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.023 [2024-04-18 11:44:13.496987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.023 [2024-04-18 11:44:13.497061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.023 [2024-04-18 11:44:13.497141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.023 [2024-04-18 11:44:13.497143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=0xf 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=decompress 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=software 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@22 -- # accel_module=software 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case 
"$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=32 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=32 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=1 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val=Yes 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:23.282 11:44:13 -- accel/accel.sh@20 -- # val= 00:08:23.282 11:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # IFS=: 00:08:23.282 11:44:13 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 
11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@20 -- # val= 00:08:25.814 11:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.814 11:44:15 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:25.814 11:44:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.814 00:08:25.814 real 0m2.650s 00:08:25.814 user 0m8.004s 00:08:25.814 sys 0m0.226s 00:08:25.814 11:44:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.814 11:44:15 -- common/autotest_common.sh@10 -- # set +x 00:08:25.814 ************************************ 00:08:25.814 END TEST accel_decomp_full_mcore 00:08:25.814 ************************************ 00:08:25.814 11:44:15 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.814 11:44:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:25.814 11:44:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.814 11:44:15 -- common/autotest_common.sh@10 -- # set +x 00:08:25.814 ************************************ 00:08:25.814 START TEST accel_decomp_mthread 00:08:25.814 ************************************ 00:08:25.814 11:44:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.814 11:44:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.814 11:44:15 -- accel/accel.sh@17 -- # local accel_module 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # IFS=: 00:08:25.814 11:44:15 -- accel/accel.sh@19 -- # read -r var val 00:08:25.814 11:44:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.814 11:44:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.814 11:44:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.814 11:44:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.814 11:44:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.814 11:44:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.814 11:44:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.814 11:44:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.814 11:44:15 -- accel/accel.sh@40 -- # local IFS=, 00:08:25.814 11:44:15 -- accel/accel.sh@41 -- # jq -r . 00:08:25.814 [2024-04-18 11:44:16.007563] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
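
Note (illustrative annotation, not part of the captured console output): accel_decomp_mthread stays on a single core but adds -T 2; in the configuration dump that follows, the count that was 1 in the single-threaded runs becomes 2, which reads as two worker threads submitting operations per core (again an inference from the dump rather than from accel_perf documentation). Sketch:

#!/usr/bin/env bash
# Sketch only -- -T 2 mirrors the logged command line.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Two worker threads on the single default core; the dumped config shows the
# value 2 where the earlier single-threaded runs show 1.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
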
00:08:25.814 [2024-04-18 11:44:16.007634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328513 ] 00:08:25.814 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.814 [2024-04-18 11:44:16.127514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.814 [2024-04-18 11:44:16.331339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val=0x1 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val=decompress 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.073 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val=software 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@22 -- # accel_module=software 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val=32 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 
-- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val=32 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val=2 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val=Yes 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 11:44:16 -- accel/accel.sh@20 -- # val= 00:08:26.074 11:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 11:44:16 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@20 -- # val= 00:08:27.977 11:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:27.977 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:27.977 11:44:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.977 11:44:18 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:27.977 11:44:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.977 00:08:27.977 real 0m2.551s 00:08:27.977 user 0m2.342s 00:08:27.977 sys 0m0.224s 00:08:27.978 11:44:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.978 11:44:18 -- common/autotest_common.sh@10 -- # set +x 
00:08:27.978 ************************************ 00:08:27.978 END TEST accel_decomp_mthread 00:08:27.978 ************************************ 00:08:28.237 11:44:18 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.237 11:44:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:28.237 11:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.237 11:44:18 -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 ************************************ 00:08:28.237 START TEST accel_deomp_full_mthread 00:08:28.237 ************************************ 00:08:28.237 11:44:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.237 11:44:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.237 11:44:18 -- accel/accel.sh@17 -- # local accel_module 00:08:28.237 11:44:18 -- accel/accel.sh@19 -- # IFS=: 00:08:28.237 11:44:18 -- accel/accel.sh@19 -- # read -r var val 00:08:28.237 11:44:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.237 11:44:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.237 11:44:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.237 11:44:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.237 11:44:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.237 11:44:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.237 11:44:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.237 11:44:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.237 11:44:18 -- accel/accel.sh@40 -- # local IFS=, 00:08:28.237 11:44:18 -- accel/accel.sh@41 -- # jq -r . 00:08:28.237 [2024-04-18 11:44:18.739448] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:08:28.237 [2024-04-18 11:44:18.739539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329068 ] 00:08:28.496 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.496 [2024-04-18 11:44:18.858332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.754 [2024-04-18 11:44:19.060926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=0x1 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=decompress 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=software 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@22 -- # accel_module=software 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=32 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 
11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=32 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=2 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val=Yes 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 11:44:19 -- accel/accel.sh@20 -- # val= 00:08:29.013 11:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 11:44:19 -- accel/accel.sh@19 -- # read -r var val 00:08:30.917 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.917 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.917 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.917 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.917 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.917 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.917 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.917 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.917 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.917 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.917 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.918 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.918 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.918 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.918 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.918 11:44:21 -- accel/accel.sh@20 -- # val= 00:08:30.918 11:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.918 11:44:21 -- accel/accel.sh@19 -- # IFS=: 00:08:30.918 11:44:21 -- accel/accel.sh@19 -- # read -r var val 00:08:30.918 11:44:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.918 11:44:21 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.918 11:44:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.918 00:08:30.918 real 0m2.584s 00:08:30.918 user 0m2.388s 00:08:30.918 sys 0m0.210s 00:08:30.918 11:44:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.918 11:44:21 -- common/autotest_common.sh@10 -- # 
set +x 00:08:30.918 ************************************ 00:08:30.918 END TEST accel_deomp_full_mthread 00:08:30.918 ************************************ 00:08:30.918 11:44:21 -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:30.918 11:44:21 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:30.918 11:44:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.918 11:44:21 -- accel/accel.sh@137 -- # build_accel_config 00:08:30.918 11:44:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.918 11:44:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.918 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:08:30.918 11:44:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.918 11:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.918 11:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.918 11:44:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.918 11:44:21 -- accel/accel.sh@40 -- # local IFS=, 00:08:30.918 11:44:21 -- accel/accel.sh@41 -- # jq -r . 00:08:30.918 ************************************ 00:08:30.918 START TEST accel_dif_functional_tests 00:08:30.918 ************************************ 00:08:30.918 11:44:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.176 [2024-04-18 11:44:21.538325] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:08:31.176 [2024-04-18 11:44:21.538414] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329616 ] 00:08:31.176 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.176 [2024-04-18 11:44:21.658245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.435 [2024-04-18 11:44:21.865221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.435 [2024-04-18 11:44:21.865291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.435 [2024-04-18 11:44:21.865295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.694 00:08:31.694 00:08:31.694 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.694 http://cunit.sourceforge.net/ 00:08:31.694 00:08:31.694 00:08:31.694 Suite: accel_dif 00:08:31.694 Test: verify: DIF generated, GUARD check ...passed 00:08:31.694 Test: verify: DIF generated, APPTAG check ...passed 00:08:31.694 Test: verify: DIF generated, REFTAG check ...passed 00:08:31.694 Test: verify: DIF not generated, GUARD check ...[2024-04-18 11:44:22.227080] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.694 [2024-04-18 11:44:22.227136] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.694 passed 00:08:31.694 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 11:44:22.227200] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.694 [2024-04-18 11:44:22.227225] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.694 passed 00:08:31.694 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 11:44:22.227256] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.694 [2024-04-18 
11:44:22.227277] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.694 passed 00:08:31.694 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:31.694 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 11:44:22.227347] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:31.694 passed 00:08:31.694 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:31.694 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:31.694 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:31.694 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 11:44:22.227508] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:31.694 passed 00:08:31.694 Test: generate copy: DIF generated, GUARD check ...passed 00:08:31.694 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:31.694 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:31.694 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:31.694 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:31.694 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:31.694 Test: generate copy: iovecs-len validate ...[2024-04-18 11:44:22.227825] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:31.694 passed 00:08:31.694 Test: generate copy: buffer alignment validate ...passed 00:08:31.694 00:08:31.694 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.694 suites 1 1 n/a 0 0 00:08:31.694 tests 20 20 20 0 0 00:08:31.694 asserts 204 204 204 0 n/a 00:08:31.694 00:08:31.694 Elapsed time = 0.003 seconds 00:08:33.091 00:08:33.091 real 0m1.984s 00:08:33.091 user 0m4.048s 00:08:33.091 sys 0m0.257s 00:08:33.091 11:44:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.091 11:44:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.091 ************************************ 00:08:33.091 END TEST accel_dif_functional_tests 00:08:33.091 ************************************ 00:08:33.091 00:08:33.091 real 1m5.233s 00:08:33.091 user 1m10.555s 00:08:33.091 sys 0m8.609s 00:08:33.091 11:44:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.091 11:44:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.091 ************************************ 00:08:33.091 END TEST accel 00:08:33.091 ************************************ 00:08:33.091 11:44:23 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:33.091 11:44:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.091 11:44:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.091 11:44:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.349 ************************************ 00:08:33.349 START TEST accel_rpc 00:08:33.349 ************************************ 00:08:33.349 11:44:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:33.349 * Looking for test storage... 
00:08:33.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:33.349 11:44:23 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:33.349 11:44:23 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2329964 00:08:33.349 11:44:23 -- accel/accel_rpc.sh@15 -- # waitforlisten 2329964 00:08:33.349 11:44:23 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:33.349 11:44:23 -- common/autotest_common.sh@817 -- # '[' -z 2329964 ']' 00:08:33.349 11:44:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.349 11:44:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:33.349 11:44:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.349 11:44:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:33.349 11:44:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.349 [2024-04-18 11:44:23.895474] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:08:33.349 [2024-04-18 11:44:23.895563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329964 ] 00:08:33.607 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.607 [2024-04-18 11:44:24.017538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.865 [2024-04-18 11:44:24.224630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.124 11:44:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:34.124 11:44:24 -- common/autotest_common.sh@850 -- # return 0 00:08:34.124 11:44:24 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:34.124 11:44:24 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:34.124 11:44:24 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:34.124 11:44:24 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:34.124 11:44:24 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:34.124 11:44:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.124 11:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.124 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:08:34.383 ************************************ 00:08:34.383 START TEST accel_assign_opcode 00:08:34.383 ************************************ 00:08:34.383 11:44:24 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:08:34.383 11:44:24 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:34.383 11:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.383 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:08:34.383 [2024-04-18 11:44:24.806731] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:34.383 11:44:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.383 11:44:24 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:34.383 11:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.383 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:08:34.383 [2024-04-18 11:44:24.814751] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:08:34.383 11:44:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.383 11:44:24 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:34.383 11:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.383 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:08:35.319 11:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.319 11:44:25 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:35.319 11:44:25 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:35.319 11:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.319 11:44:25 -- accel/accel_rpc.sh@42 -- # grep software 00:08:35.319 11:44:25 -- common/autotest_common.sh@10 -- # set +x 00:08:35.319 11:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.319 software 00:08:35.319 00:08:35.319 real 0m0.888s 00:08:35.319 user 0m0.028s 00:08:35.319 sys 0m0.009s 00:08:35.319 11:44:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:35.319 11:44:25 -- common/autotest_common.sh@10 -- # set +x 00:08:35.319 ************************************ 00:08:35.319 END TEST accel_assign_opcode 00:08:35.319 ************************************ 00:08:35.319 11:44:25 -- accel/accel_rpc.sh@55 -- # killprocess 2329964 00:08:35.319 11:44:25 -- common/autotest_common.sh@936 -- # '[' -z 2329964 ']' 00:08:35.319 11:44:25 -- common/autotest_common.sh@940 -- # kill -0 2329964 00:08:35.319 11:44:25 -- common/autotest_common.sh@941 -- # uname 00:08:35.319 11:44:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.319 11:44:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2329964 00:08:35.319 11:44:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.319 11:44:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.319 11:44:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2329964' 00:08:35.319 killing process with pid 2329964 00:08:35.319 11:44:25 -- common/autotest_common.sh@955 -- # kill 2329964 00:08:35.319 11:44:25 -- common/autotest_common.sh@960 -- # wait 2329964 00:08:37.850 00:08:37.850 real 0m4.433s 00:08:37.850 user 0m4.342s 00:08:37.850 sys 0m0.682s 00:08:37.850 11:44:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.850 11:44:28 -- common/autotest_common.sh@10 -- # set +x 00:08:37.850 ************************************ 00:08:37.850 END TEST accel_rpc 00:08:37.850 ************************************ 00:08:37.850 11:44:28 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.850 11:44:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.850 11:44:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.850 11:44:28 -- common/autotest_common.sh@10 -- # set +x 00:08:37.850 ************************************ 00:08:37.850 START TEST app_cmdline 00:08:37.850 ************************************ 00:08:37.850 11:44:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.108 * Looking for test storage... 
00:08:38.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:38.108 11:44:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.108 11:44:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2330848 00:08:38.108 11:44:28 -- app/cmdline.sh@18 -- # waitforlisten 2330848 00:08:38.108 11:44:28 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.108 11:44:28 -- common/autotest_common.sh@817 -- # '[' -z 2330848 ']' 00:08:38.108 11:44:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.108 11:44:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:38.108 11:44:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.108 11:44:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:38.108 11:44:28 -- common/autotest_common.sh@10 -- # set +x 00:08:38.108 [2024-04-18 11:44:28.513513] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:08:38.108 [2024-04-18 11:44:28.513606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330848 ] 00:08:38.108 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.108 [2024-04-18 11:44:28.639074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.366 [2024-04-18 11:44:28.862378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.332 11:44:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:39.332 11:44:29 -- common/autotest_common.sh@850 -- # return 0 00:08:39.332 11:44:29 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.598 { 00:08:39.598 "version": "SPDK v24.05-pre git sha1 65b4e17c6", 00:08:39.598 "fields": { 00:08:39.598 "major": 24, 00:08:39.598 "minor": 5, 00:08:39.598 "patch": 0, 00:08:39.598 "suffix": "-pre", 00:08:39.598 "commit": "65b4e17c6" 00:08:39.598 } 00:08:39.598 } 00:08:39.598 11:44:29 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.598 11:44:29 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.598 11:44:29 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:39.598 11:44:29 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.598 11:44:29 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.598 11:44:29 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.598 11:44:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.598 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:08:39.598 11:44:29 -- app/cmdline.sh@26 -- # sort 00:08:39.598 11:44:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.598 11:44:29 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.598 11:44:29 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.598 11:44:29 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.598 11:44:29 -- common/autotest_common.sh@638 -- # local es=0 00:08:39.598 11:44:29 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.598 11:44:29 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.598 11:44:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.598 11:44:29 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.598 11:44:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.598 11:44:29 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.598 11:44:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.598 11:44:29 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.598 11:44:29 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.598 11:44:29 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.598 request: 00:08:39.598 { 00:08:39.598 "method": "env_dpdk_get_mem_stats", 00:08:39.598 "req_id": 1 00:08:39.598 } 00:08:39.598 Got JSON-RPC error response 00:08:39.598 response: 00:08:39.598 { 00:08:39.598 "code": -32601, 00:08:39.598 "message": "Method not found" 00:08:39.598 } 00:08:39.598 11:44:30 -- common/autotest_common.sh@641 -- # es=1 00:08:39.598 11:44:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:39.598 11:44:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:39.598 11:44:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:39.598 11:44:30 -- app/cmdline.sh@1 -- # killprocess 2330848 00:08:39.598 11:44:30 -- common/autotest_common.sh@936 -- # '[' -z 2330848 ']' 00:08:39.598 11:44:30 -- common/autotest_common.sh@940 -- # kill -0 2330848 00:08:39.598 11:44:30 -- common/autotest_common.sh@941 -- # uname 00:08:39.598 11:44:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.598 11:44:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2330848 00:08:39.856 11:44:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.856 11:44:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.856 11:44:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2330848' 00:08:39.856 killing process with pid 2330848 00:08:39.856 11:44:30 -- common/autotest_common.sh@955 -- # kill 2330848 00:08:39.856 11:44:30 -- common/autotest_common.sh@960 -- # wait 2330848 00:08:42.383 00:08:42.383 real 0m4.186s 00:08:42.383 user 0m4.308s 00:08:42.383 sys 0m0.632s 00:08:42.383 11:44:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:42.383 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:42.383 ************************************ 00:08:42.383 END TEST app_cmdline 00:08:42.383 ************************************ 00:08:42.383 11:44:32 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.383 11:44:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.383 11:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.383 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:42.383 ************************************ 00:08:42.383 START TEST version 00:08:42.383 
************************************ 00:08:42.383 11:44:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.383 * Looking for test storage... 00:08:42.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:42.383 11:44:32 -- app/version.sh@17 -- # get_header_version major 00:08:42.383 11:44:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.383 11:44:32 -- app/version.sh@14 -- # cut -f2 00:08:42.383 11:44:32 -- app/version.sh@14 -- # tr -d '"' 00:08:42.383 11:44:32 -- app/version.sh@17 -- # major=24 00:08:42.383 11:44:32 -- app/version.sh@18 -- # get_header_version minor 00:08:42.383 11:44:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.383 11:44:32 -- app/version.sh@14 -- # cut -f2 00:08:42.383 11:44:32 -- app/version.sh@14 -- # tr -d '"' 00:08:42.383 11:44:32 -- app/version.sh@18 -- # minor=5 00:08:42.383 11:44:32 -- app/version.sh@19 -- # get_header_version patch 00:08:42.383 11:44:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.383 11:44:32 -- app/version.sh@14 -- # cut -f2 00:08:42.383 11:44:32 -- app/version.sh@14 -- # tr -d '"' 00:08:42.383 11:44:32 -- app/version.sh@19 -- # patch=0 00:08:42.383 11:44:32 -- app/version.sh@20 -- # get_header_version suffix 00:08:42.383 11:44:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.383 11:44:32 -- app/version.sh@14 -- # cut -f2 00:08:42.383 11:44:32 -- app/version.sh@14 -- # tr -d '"' 00:08:42.383 11:44:32 -- app/version.sh@20 -- # suffix=-pre 00:08:42.383 11:44:32 -- app/version.sh@22 -- # version=24.5 00:08:42.383 11:44:32 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:42.383 11:44:32 -- app/version.sh@28 -- # version=24.5rc0 00:08:42.384 11:44:32 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:42.384 11:44:32 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:42.384 11:44:32 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:42.384 11:44:32 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:42.384 00:08:42.384 real 0m0.187s 00:08:42.384 user 0m0.084s 00:08:42.384 sys 0m0.147s 00:08:42.384 11:44:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:42.384 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:42.384 ************************************ 00:08:42.384 END TEST version 00:08:42.384 ************************************ 00:08:42.642 11:44:32 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@194 -- # uname -s 00:08:42.642 11:44:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:42.642 11:44:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:42.642 11:44:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:42.642 11:44:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:42.642 11:44:32 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@258 -- # timing_exit lib 00:08:42.642 11:44:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:42.642 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:42.642 11:44:32 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:08:42.642 11:44:32 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:08:42.642 11:44:32 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:42.642 11:44:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.642 11:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.642 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:42.642 ************************************ 00:08:42.642 START TEST nvmf_tcp 00:08:42.642 ************************************ 00:08:42.642 11:44:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:42.900 * Looking for test storage... 00:08:42.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.900 11:44:33 -- nvmf/common.sh@7 -- # uname -s 00:08:42.900 11:44:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.900 11:44:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.900 11:44:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.900 11:44:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.900 11:44:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.900 11:44:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.900 11:44:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.900 11:44:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.900 11:44:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.900 11:44:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.900 11:44:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:42.900 11:44:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:42.900 11:44:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.900 11:44:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.900 11:44:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.900 11:44:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.900 11:44:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.900 11:44:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.900 11:44:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.900 11:44:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.900 11:44:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.900 11:44:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.900 11:44:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.900 11:44:33 -- paths/export.sh@5 -- # export PATH 00:08:42.900 11:44:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.900 11:44:33 -- nvmf/common.sh@47 -- # : 0 00:08:42.900 11:44:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.900 11:44:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.900 11:44:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.900 11:44:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.900 11:44:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.900 11:44:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.900 11:44:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.900 11:44:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:42.900 11:44:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:42.900 11:44:33 -- common/autotest_common.sh@10 -- # set +x 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:42.900 11:44:33 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:42.900 11:44:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.900 11:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.900 11:44:33 -- common/autotest_common.sh@10 -- # set +x 00:08:42.900 ************************************ 00:08:42.900 START TEST nvmf_example 00:08:42.900 ************************************ 00:08:42.900 11:44:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:43.158 * Looking for test storage... 
00:08:43.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.158 11:44:33 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.158 11:44:33 -- nvmf/common.sh@7 -- # uname -s 00:08:43.158 11:44:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.158 11:44:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.158 11:44:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.158 11:44:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.158 11:44:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.158 11:44:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.158 11:44:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.158 11:44:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.158 11:44:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.158 11:44:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.158 11:44:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:43.158 11:44:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:43.158 11:44:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.159 11:44:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.159 11:44:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.159 11:44:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.159 11:44:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.159 11:44:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.159 11:44:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.159 11:44:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.159 11:44:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.159 11:44:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.159 11:44:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.159 11:44:33 -- paths/export.sh@5 -- # export PATH 00:08:43.159 11:44:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.159 11:44:33 -- nvmf/common.sh@47 -- # : 0 00:08:43.159 11:44:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.159 11:44:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.159 11:44:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.159 11:44:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.159 11:44:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.159 11:44:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.159 11:44:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.159 11:44:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.159 11:44:33 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:43.159 11:44:33 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:43.159 11:44:33 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:43.159 11:44:33 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:43.159 11:44:33 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:43.159 11:44:33 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:43.159 11:44:33 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:43.159 11:44:33 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:43.159 11:44:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:43.159 11:44:33 -- common/autotest_common.sh@10 -- # set +x 00:08:43.159 11:44:33 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:43.159 11:44:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:43.159 11:44:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.159 11:44:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:43.159 11:44:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:43.159 11:44:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:43.159 11:44:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.159 11:44:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.159 11:44:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.159 11:44:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:43.159 11:44:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:43.159 11:44:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.159 11:44:33 -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.724 11:44:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:49.724 11:44:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.724 11:44:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.724 11:44:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.724 11:44:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.724 11:44:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.724 11:44:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.724 11:44:40 -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.724 11:44:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.724 11:44:40 -- nvmf/common.sh@296 -- # e810=() 00:08:49.724 11:44:40 -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.724 11:44:40 -- nvmf/common.sh@297 -- # x722=() 00:08:49.724 11:44:40 -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.724 11:44:40 -- nvmf/common.sh@298 -- # mlx=() 00:08:49.724 11:44:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.724 11:44:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.724 11:44:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.724 11:44:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.724 11:44:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.724 11:44:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.724 11:44:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:49.724 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:49.724 11:44:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.724 11:44:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:49.724 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:49.724 11:44:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:08:49.724 11:44:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.724 11:44:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.724 11:44:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.724 11:44:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:49.724 11:44:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.724 11:44:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:49.724 Found net devices under 0000:af:00.0: cvl_0_0 00:08:49.724 11:44:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.724 11:44:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.724 11:44:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.724 11:44:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:49.724 11:44:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.724 11:44:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:49.724 Found net devices under 0000:af:00.1: cvl_0_1 00:08:49.724 11:44:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.724 11:44:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:49.724 11:44:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:49.724 11:44:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:49.724 11:44:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:49.724 11:44:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.724 11:44:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.724 11:44:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.724 11:44:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.724 11:44:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.724 11:44:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.724 11:44:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.724 11:44:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.724 11:44:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.724 11:44:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.725 11:44:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.725 11:44:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.725 11:44:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.725 11:44:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.983 11:44:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.983 11:44:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.983 11:44:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.983 11:44:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.983 11:44:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.983 11:44:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:08:49.983 00:08:49.983 --- 10.0.0.2 ping statistics --- 00:08:49.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.983 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:49.983 11:44:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:08:49.983 00:08:49.983 --- 10.0.0.1 ping statistics --- 00:08:49.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.983 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:49.983 11:44:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.983 11:44:40 -- nvmf/common.sh@411 -- # return 0 00:08:49.983 11:44:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:49.983 11:44:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.983 11:44:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:49.983 11:44:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:49.983 11:44:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.983 11:44:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:49.983 11:44:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:49.983 11:44:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:49.983 11:44:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:49.983 11:44:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:49.983 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:08:49.983 11:44:40 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:49.983 11:44:40 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:49.983 11:44:40 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:49.983 11:44:40 -- target/nvmf_example.sh@34 -- # nvmfpid=2335194 00:08:49.983 11:44:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.983 11:44:40 -- target/nvmf_example.sh@36 -- # waitforlisten 2335194 00:08:49.983 11:44:40 -- common/autotest_common.sh@817 -- # '[' -z 2335194 ']' 00:08:49.983 11:44:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.983 11:44:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:49.983 11:44:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.983 11:44:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:49.983 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:08:50.241 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.806 11:44:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:50.806 11:44:41 -- common/autotest_common.sh@850 -- # return 0 00:08:50.806 11:44:41 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:50.806 11:44:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:50.806 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:08:51.065 11:44:41 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.065 11:44:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.065 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:08:51.065 11:44:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.065 11:44:41 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:51.065 11:44:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.065 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:08:51.065 11:44:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.065 11:44:41 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:51.065 11:44:41 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.065 11:44:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.065 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:08:51.065 11:44:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.065 11:44:41 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:51.065 11:44:41 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.065 11:44:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.065 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:08:51.065 11:44:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.065 11:44:41 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.065 11:44:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.065 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:08:51.065 11:44:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.065 11:44:41 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:51.065 11:44:41 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:51.065 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.268 Initializing NVMe Controllers 00:09:03.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:03.268 Initialization complete. Launching workers. 
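The rpc_cmd calls traced above build the target configuration that the spdk_nvme_perf run (whose results follow immediately below) exercises. In these tests rpc_cmd is a shell wrapper around the target's JSON-RPC socket; invoking scripts/rpc.py with the same method names and arguments against an already-running target is an assumed-equivalent way to reproduce the setup by hand. A sketch, with paths relative to the SPDK tree and the same flags as the traced calls:

    # Assumes an SPDK target is already listening on the default RPC socket.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # same transport flags as the trace
    scripts/rpc.py bdev_malloc_create 64 512                        # 64 MiB RAM-backed bdev -> "Malloc0"
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive it with the same perf invocation as the test:
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'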
00:09:03.268 ======================================================== 00:09:03.268 Latency(us) 00:09:03.268 Device Information : IOPS MiB/s Average min max 00:09:03.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14805.04 57.83 4324.02 804.02 15438.81 00:09:03.268 ======================================================== 00:09:03.268 Total : 14805.04 57.83 4324.02 804.02 15438.81 00:09:03.268 00:09:03.268 11:44:51 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:03.268 11:44:51 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:03.269 11:44:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:03.269 11:44:51 -- nvmf/common.sh@117 -- # sync 00:09:03.269 11:44:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.269 11:44:51 -- nvmf/common.sh@120 -- # set +e 00:09:03.269 11:44:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.269 11:44:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.269 rmmod nvme_tcp 00:09:03.269 rmmod nvme_fabrics 00:09:03.269 rmmod nvme_keyring 00:09:03.269 11:44:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.269 11:44:52 -- nvmf/common.sh@124 -- # set -e 00:09:03.269 11:44:52 -- nvmf/common.sh@125 -- # return 0 00:09:03.269 11:44:52 -- nvmf/common.sh@478 -- # '[' -n 2335194 ']' 00:09:03.269 11:44:52 -- nvmf/common.sh@479 -- # killprocess 2335194 00:09:03.269 11:44:52 -- common/autotest_common.sh@936 -- # '[' -z 2335194 ']' 00:09:03.269 11:44:52 -- common/autotest_common.sh@940 -- # kill -0 2335194 00:09:03.269 11:44:52 -- common/autotest_common.sh@941 -- # uname 00:09:03.269 11:44:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.269 11:44:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2335194 00:09:03.269 11:44:52 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:09:03.269 11:44:52 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:09:03.269 11:44:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2335194' 00:09:03.269 killing process with pid 2335194 00:09:03.269 11:44:52 -- common/autotest_common.sh@955 -- # kill 2335194 00:09:03.269 11:44:52 -- common/autotest_common.sh@960 -- # wait 2335194 00:09:03.269 nvmf threads initialize successfully 00:09:03.269 bdev subsystem init successfully 00:09:03.269 created a nvmf target service 00:09:03.269 create targets's poll groups done 00:09:03.269 all subsystems of target started 00:09:03.269 nvmf target is running 00:09:03.269 all subsystems of target stopped 00:09:03.269 destroy targets's poll groups done 00:09:03.269 destroyed the nvmf target service 00:09:03.269 bdev subsystem finish successfully 00:09:03.269 nvmf threads destroy successfully 00:09:03.269 11:44:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:03.269 11:44:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:03.269 11:44:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:03.269 11:44:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.269 11:44:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.269 11:44:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.269 11:44:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.269 11:44:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.176 11:44:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.176 11:44:55 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:05.176 11:44:55 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:09:05.176 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:09:05.176 00:09:05.176 real 0m22.027s 00:09:05.176 user 0m49.537s 00:09:05.176 sys 0m7.369s 00:09:05.176 11:44:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.176 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:09:05.176 ************************************ 00:09:05.176 END TEST nvmf_example 00:09:05.176 ************************************ 00:09:05.176 11:44:55 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:05.176 11:44:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:05.176 11:44:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.176 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:09:05.176 ************************************ 00:09:05.176 START TEST nvmf_filesystem 00:09:05.176 ************************************ 00:09:05.176 11:44:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:05.440 * Looking for test storage... 00:09:05.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.440 11:44:55 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:05.440 11:44:55 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:05.440 11:44:55 -- common/autotest_common.sh@34 -- # set -e 00:09:05.440 11:44:55 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:05.440 11:44:55 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:05.440 11:44:55 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:05.440 11:44:55 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:05.440 11:44:55 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:05.440 11:44:55 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:05.440 11:44:55 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:05.440 11:44:55 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:05.440 11:44:55 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:05.440 11:44:55 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:05.440 11:44:55 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:05.440 11:44:55 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:05.440 11:44:55 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:05.440 11:44:55 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:05.440 11:44:55 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:05.440 11:44:55 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:05.440 11:44:55 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:05.440 11:44:55 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:05.440 11:44:55 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:05.440 11:44:55 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:05.440 11:44:55 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:05.440 11:44:55 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:05.440 11:44:55 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:05.440 11:44:55 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:05.440 11:44:55 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:05.440 11:44:55 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:05.440 11:44:55 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:05.440 11:44:55 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:05.440 11:44:55 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:05.440 11:44:55 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:05.440 11:44:55 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:05.440 11:44:55 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:05.440 11:44:55 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:05.440 11:44:55 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:05.440 11:44:55 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:05.440 11:44:55 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:05.440 11:44:55 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:05.440 11:44:55 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:05.440 11:44:55 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:05.440 11:44:55 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:05.440 11:44:55 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:05.440 11:44:55 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:05.440 11:44:55 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:05.440 11:44:55 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:05.440 11:44:55 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:05.440 11:44:55 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:09:05.440 11:44:55 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:09:05.440 11:44:55 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:05.440 11:44:55 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:09:05.440 11:44:55 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:09:05.440 11:44:55 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:09:05.440 11:44:55 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:09:05.440 11:44:55 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:09:05.440 11:44:55 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:09:05.440 11:44:55 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:09:05.440 11:44:55 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:09:05.440 11:44:55 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:09:05.440 11:44:55 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:09:05.440 11:44:55 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:09:05.440 11:44:55 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:09:05.440 11:44:55 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:09:05.440 11:44:55 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:09:05.440 
11:44:55 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:09:05.440 11:44:55 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:09:05.440 11:44:55 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:09:05.440 11:44:55 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:05.440 11:44:55 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:09:05.440 11:44:55 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:09:05.440 11:44:55 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:09:05.440 11:44:55 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:09:05.440 11:44:55 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:09:05.440 11:44:55 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:09:05.440 11:44:55 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:09:05.440 11:44:55 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:09:05.440 11:44:55 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:09:05.440 11:44:55 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:09:05.440 11:44:55 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:09:05.440 11:44:55 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:05.440 11:44:55 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:09:05.440 11:44:55 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:09:05.440 11:44:55 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:05.440 11:44:55 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:05.440 11:44:55 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:05.440 11:44:55 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:05.440 11:44:55 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:05.440 11:44:55 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:05.440 11:44:55 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:05.440 11:44:55 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:05.440 11:44:55 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:05.440 11:44:55 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:05.440 11:44:55 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:05.440 11:44:55 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:05.440 11:44:55 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:05.440 11:44:55 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:05.440 11:44:55 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:05.440 11:44:55 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:05.440 #define SPDK_CONFIG_H 00:09:05.440 #define SPDK_CONFIG_APPS 1 00:09:05.440 #define SPDK_CONFIG_ARCH native 00:09:05.440 #define SPDK_CONFIG_ASAN 1 00:09:05.440 #undef SPDK_CONFIG_AVAHI 00:09:05.440 #undef SPDK_CONFIG_CET 00:09:05.440 #define SPDK_CONFIG_COVERAGE 1 00:09:05.440 #define SPDK_CONFIG_CROSS_PREFIX 00:09:05.440 #undef SPDK_CONFIG_CRYPTO 00:09:05.440 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:05.440 
#undef SPDK_CONFIG_CUSTOMOCF 00:09:05.440 #undef SPDK_CONFIG_DAOS 00:09:05.440 #define SPDK_CONFIG_DAOS_DIR 00:09:05.440 #define SPDK_CONFIG_DEBUG 1 00:09:05.440 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:05.440 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:05.440 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:05.440 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:05.440 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:05.441 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:05.441 #define SPDK_CONFIG_EXAMPLES 1 00:09:05.441 #undef SPDK_CONFIG_FC 00:09:05.441 #define SPDK_CONFIG_FC_PATH 00:09:05.441 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:05.441 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:05.441 #undef SPDK_CONFIG_FUSE 00:09:05.441 #undef SPDK_CONFIG_FUZZER 00:09:05.441 #define SPDK_CONFIG_FUZZER_LIB 00:09:05.441 #undef SPDK_CONFIG_GOLANG 00:09:05.441 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:05.441 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:05.441 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:05.441 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:09:05.441 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:05.441 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:05.441 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:05.441 #define SPDK_CONFIG_IDXD 1 00:09:05.441 #undef SPDK_CONFIG_IDXD_KERNEL 00:09:05.441 #undef SPDK_CONFIG_IPSEC_MB 00:09:05.441 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:05.441 #define SPDK_CONFIG_ISAL 1 00:09:05.441 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:05.441 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:05.441 #define SPDK_CONFIG_LIBDIR 00:09:05.441 #undef SPDK_CONFIG_LTO 00:09:05.441 #define SPDK_CONFIG_MAX_LCORES 00:09:05.441 #define SPDK_CONFIG_NVME_CUSE 1 00:09:05.441 #undef SPDK_CONFIG_OCF 00:09:05.441 #define SPDK_CONFIG_OCF_PATH 00:09:05.441 #define SPDK_CONFIG_OPENSSL_PATH 00:09:05.441 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:05.441 #define SPDK_CONFIG_PGO_DIR 00:09:05.441 #undef SPDK_CONFIG_PGO_USE 00:09:05.441 #define SPDK_CONFIG_PREFIX /usr/local 00:09:05.441 #undef SPDK_CONFIG_RAID5F 00:09:05.441 #undef SPDK_CONFIG_RBD 00:09:05.441 #define SPDK_CONFIG_RDMA 1 00:09:05.441 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:05.441 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:05.441 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:05.441 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:05.441 #define SPDK_CONFIG_SHARED 1 00:09:05.441 #undef SPDK_CONFIG_SMA 00:09:05.441 #define SPDK_CONFIG_TESTS 1 00:09:05.441 #undef SPDK_CONFIG_TSAN 00:09:05.441 #define SPDK_CONFIG_UBLK 1 00:09:05.441 #define SPDK_CONFIG_UBSAN 1 00:09:05.441 #undef SPDK_CONFIG_UNIT_TESTS 00:09:05.441 #undef SPDK_CONFIG_URING 00:09:05.441 #define SPDK_CONFIG_URING_PATH 00:09:05.441 #undef SPDK_CONFIG_URING_ZNS 00:09:05.441 #undef SPDK_CONFIG_USDT 00:09:05.441 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:05.441 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:05.441 #define SPDK_CONFIG_VFIO_USER 1 00:09:05.441 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:05.441 #define SPDK_CONFIG_VHOST 1 00:09:05.441 #define SPDK_CONFIG_VIRTIO 1 00:09:05.441 #undef SPDK_CONFIG_VTUNE 00:09:05.441 #define SPDK_CONFIG_VTUNE_DIR 00:09:05.441 #define SPDK_CONFIG_WERROR 1 00:09:05.441 #define SPDK_CONFIG_WPDK_DIR 00:09:05.441 #undef SPDK_CONFIG_XNVME 00:09:05.441 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:05.441 11:44:55 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:05.441 11:44:55 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.441 11:44:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.441 11:44:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.441 11:44:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.441 11:44:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.441 11:44:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.441 11:44:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.441 11:44:55 -- paths/export.sh@5 -- # export PATH 00:09:05.441 11:44:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.441 11:44:55 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:05.441 11:44:55 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:05.441 11:44:55 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:05.441 11:44:55 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:05.441 11:44:55 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:05.441 11:44:55 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:05.441 11:44:55 -- pm/common@67 -- # TEST_TAG=N/A 00:09:05.441 11:44:55 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:05.441 11:44:55 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:05.441 11:44:55 -- pm/common@71 -- # uname -s 00:09:05.441 11:44:55 -- pm/common@71 -- # PM_OS=Linux 00:09:05.441 11:44:55 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:05.441 11:44:55 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:09:05.441 11:44:55 -- pm/common@76 -- # [[ Linux == Linux ]] 00:09:05.441 11:44:55 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:09:05.441 11:44:55 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:09:05.441 11:44:55 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:05.441 11:44:55 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:05.441 11:44:55 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:09:05.441 11:44:55 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:09:05.441 11:44:55 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:05.441 11:44:55 -- common/autotest_common.sh@57 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:09:05.441 11:44:55 -- common/autotest_common.sh@61 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:05.441 11:44:55 -- common/autotest_common.sh@63 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:09:05.441 11:44:55 -- common/autotest_common.sh@65 -- # : 1 00:09:05.441 11:44:55 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:05.441 11:44:55 -- common/autotest_common.sh@67 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:09:05.441 11:44:55 -- common/autotest_common.sh@69 -- # : 00:09:05.441 11:44:55 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:09:05.441 11:44:55 -- common/autotest_common.sh@71 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:09:05.441 11:44:55 -- common/autotest_common.sh@73 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:09:05.441 11:44:55 -- common/autotest_common.sh@75 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:09:05.441 11:44:55 -- common/autotest_common.sh@77 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:05.441 11:44:55 -- common/autotest_common.sh@79 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:09:05.441 11:44:55 -- common/autotest_common.sh@81 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:09:05.441 11:44:55 -- common/autotest_common.sh@83 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:09:05.441 11:44:55 -- common/autotest_common.sh@85 -- # : 1 00:09:05.441 11:44:55 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:09:05.441 11:44:55 -- common/autotest_common.sh@87 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:09:05.441 11:44:55 -- common/autotest_common.sh@89 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:09:05.441 11:44:55 -- common/autotest_common.sh@91 -- # : 1 
00:09:05.441 11:44:55 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:09:05.441 11:44:55 -- common/autotest_common.sh@93 -- # : 1 00:09:05.441 11:44:55 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:09:05.441 11:44:55 -- common/autotest_common.sh@95 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:05.441 11:44:55 -- common/autotest_common.sh@97 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:09:05.441 11:44:55 -- common/autotest_common.sh@99 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:09:05.441 11:44:55 -- common/autotest_common.sh@101 -- # : tcp 00:09:05.441 11:44:55 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:05.441 11:44:55 -- common/autotest_common.sh@103 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:09:05.441 11:44:55 -- common/autotest_common.sh@105 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:09:05.441 11:44:55 -- common/autotest_common.sh@107 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:09:05.441 11:44:55 -- common/autotest_common.sh@109 -- # : 0 00:09:05.441 11:44:55 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:09:05.442 11:44:55 -- common/autotest_common.sh@111 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:09:05.442 11:44:55 -- common/autotest_common.sh@113 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:09:05.442 11:44:55 -- common/autotest_common.sh@115 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:09:05.442 11:44:55 -- common/autotest_common.sh@117 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:05.442 11:44:55 -- common/autotest_common.sh@119 -- # : 1 00:09:05.442 11:44:55 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:09:05.442 11:44:55 -- common/autotest_common.sh@121 -- # : 1 00:09:05.442 11:44:55 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:09:05.442 11:44:55 -- common/autotest_common.sh@123 -- # : 00:09:05.442 11:44:55 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:05.442 11:44:55 -- common/autotest_common.sh@125 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:09:05.442 11:44:55 -- common/autotest_common.sh@127 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:09:05.442 11:44:55 -- common/autotest_common.sh@129 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:09:05.442 11:44:55 -- common/autotest_common.sh@131 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:09:05.442 11:44:55 -- common/autotest_common.sh@133 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:09:05.442 11:44:55 -- common/autotest_common.sh@135 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:09:05.442 11:44:55 -- common/autotest_common.sh@137 -- # : 00:09:05.442 11:44:55 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:09:05.442 11:44:55 -- 
common/autotest_common.sh@139 -- # : true 00:09:05.442 11:44:55 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:09:05.442 11:44:55 -- common/autotest_common.sh@141 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:09:05.442 11:44:55 -- common/autotest_common.sh@143 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:09:05.442 11:44:55 -- common/autotest_common.sh@145 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:09:05.442 11:44:55 -- common/autotest_common.sh@147 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:09:05.442 11:44:55 -- common/autotest_common.sh@149 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:09:05.442 11:44:55 -- common/autotest_common.sh@151 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:09:05.442 11:44:55 -- common/autotest_common.sh@153 -- # : e810 00:09:05.442 11:44:55 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:09:05.442 11:44:55 -- common/autotest_common.sh@155 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:09:05.442 11:44:55 -- common/autotest_common.sh@157 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:09:05.442 11:44:55 -- common/autotest_common.sh@159 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:09:05.442 11:44:55 -- common/autotest_common.sh@161 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:09:05.442 11:44:55 -- common/autotest_common.sh@163 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:09:05.442 11:44:55 -- common/autotest_common.sh@166 -- # : 00:09:05.442 11:44:55 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:09:05.442 11:44:55 -- common/autotest_common.sh@168 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:09:05.442 11:44:55 -- common/autotest_common.sh@170 -- # : 0 00:09:05.442 11:44:55 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:05.442 11:44:55 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:05.442 11:44:55 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:05.442 11:44:55 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:05.442 11:44:55 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:05.442 11:44:55 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:05.442 11:44:55 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:05.442 11:44:55 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:09:05.442 11:44:55 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:05.442 11:44:55 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:05.442 11:44:55 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:05.442 11:44:55 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:05.442 11:44:55 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:05.442 11:44:55 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:09:05.442 11:44:55 -- common/autotest_common.sh@199 -- # cat 00:09:05.442 11:44:55 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:09:05.442 11:44:55 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:05.442 11:44:55 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:05.442 11:44:55 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:05.442 11:44:55 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:05.442 11:44:55 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:09:05.442 11:44:55 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:09:05.442 11:44:55 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:05.442 11:44:55 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:05.442 11:44:55 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:05.442 11:44:55 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:05.442 11:44:55 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:05.442 11:44:55 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:05.442 11:44:55 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:05.442 11:44:55 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:05.442 11:44:55 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:05.442 11:44:55 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:05.442 11:44:55 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:05.442 11:44:55 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:05.442 11:44:55 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:09:05.442 11:44:55 -- common/autotest_common.sh@252 -- # export valgrind= 00:09:05.442 11:44:55 -- common/autotest_common.sh@252 -- # valgrind= 00:09:05.442 11:44:55 -- common/autotest_common.sh@258 -- # uname -s 00:09:05.442 11:44:55 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:09:05.442 11:44:55 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:09:05.442 11:44:55 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:09:05.442 11:44:55 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:09:05.443 11:44:55 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:09:05.443 
11:44:55 -- common/autotest_common.sh@268 -- # MAKE=make 00:09:05.443 11:44:55 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j112 00:09:05.443 11:44:55 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:09:05.443 11:44:55 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:09:05.443 11:44:55 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:09:05.443 11:44:55 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:09:05.443 11:44:55 -- common/autotest_common.sh@289 -- # for i in "$@" 00:09:05.443 11:44:55 -- common/autotest_common.sh@290 -- # case "$i" in 00:09:05.443 11:44:55 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:09:05.443 11:44:55 -- common/autotest_common.sh@307 -- # [[ -z 2337971 ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@307 -- # kill -0 2337971 00:09:05.443 11:44:55 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:09:05.443 11:44:55 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:09:05.443 11:44:55 -- common/autotest_common.sh@320 -- # local mount target_dir 00:09:05.443 11:44:55 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:09:05.443 11:44:55 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:09:05.443 11:44:55 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:09:05.443 11:44:55 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:09:05.443 11:44:55 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.0DSdxz 00:09:05.443 11:44:55 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:05.443 11:44:55 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0DSdxz/tests/target /tmp/spdk.0DSdxz 00:09:05.443 11:44:55 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@316 -- # df -T 00:09:05.443 11:44:55 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=995438592 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288991232 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=52181143552 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61742301184 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=9561157632 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=30868537344 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30871150592 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=12339077120 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12348461056 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=9383936 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=30870589440 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30871150592 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=561152 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=6174224384 00:09:05.443 11:44:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6174228480 00:09:05.443 11:44:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:09:05.443 11:44:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:09:05.443 11:44:55 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:09:05.443 * Looking for test storage... 
00:09:05.443 11:44:55 -- common/autotest_common.sh@357 -- # local target_space new_size 00:09:05.443 11:44:55 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:09:05.443 11:44:55 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.443 11:44:55 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:05.443 11:44:55 -- common/autotest_common.sh@361 -- # mount=/ 00:09:05.443 11:44:55 -- common/autotest_common.sh@363 -- # target_space=52181143552 00:09:05.443 11:44:55 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:09:05.443 11:44:55 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:09:05.443 11:44:55 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@370 -- # new_size=11775750144 00:09:05.443 11:44:55 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:05.443 11:44:55 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.443 11:44:55 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.443 11:44:55 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.443 11:44:55 -- common/autotest_common.sh@378 -- # return 0 00:09:05.443 11:44:55 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:09:05.443 11:44:55 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:09:05.443 11:44:55 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:05.443 11:44:55 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:05.443 11:44:55 -- common/autotest_common.sh@1673 -- # true 00:09:05.443 11:44:55 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:09:05.443 11:44:55 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:05.443 11:44:55 -- common/autotest_common.sh@27 -- # exec 00:09:05.443 11:44:55 -- common/autotest_common.sh@29 -- # exec 00:09:05.443 11:44:55 -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:05.443 11:44:55 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:05.443 11:44:55 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:05.443 11:44:55 -- common/autotest_common.sh@18 -- # set -x 00:09:05.443 11:44:55 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.443 11:44:55 -- nvmf/common.sh@7 -- # uname -s 00:09:05.443 11:44:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.443 11:44:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.443 11:44:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.443 11:44:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.443 11:44:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.443 11:44:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.443 11:44:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.443 11:44:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.443 11:44:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.443 11:44:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.443 11:44:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:05.443 11:44:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:05.443 11:44:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.443 11:44:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.443 11:44:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.443 11:44:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.443 11:44:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.443 11:44:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.443 11:44:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.443 11:44:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.444 11:44:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.444 11:44:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.444 11:44:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.444 11:44:55 -- paths/export.sh@5 -- # export PATH 00:09:05.444 11:44:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.444 11:44:55 -- nvmf/common.sh@47 -- # : 0 00:09:05.444 11:44:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.444 11:44:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.444 11:44:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.444 11:44:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.444 11:44:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.444 11:44:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.444 11:44:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.444 11:44:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.444 11:44:55 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:05.444 11:44:55 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:05.444 11:44:55 -- target/filesystem.sh@15 -- # nvmftestinit 00:09:05.444 11:44:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:05.444 11:44:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.444 11:44:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:05.444 11:44:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:05.444 11:44:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:05.444 11:44:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.444 11:44:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.444 11:44:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.444 11:44:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:05.444 11:44:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:05.444 11:44:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.444 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:09:12.078 11:45:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:12.078 11:45:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.078 11:45:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.078 11:45:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.079 11:45:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.079 11:45:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.079 11:45:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.079 11:45:01 -- 
nvmf/common.sh@295 -- # net_devs=() 00:09:12.079 11:45:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.079 11:45:01 -- nvmf/common.sh@296 -- # e810=() 00:09:12.079 11:45:01 -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.079 11:45:01 -- nvmf/common.sh@297 -- # x722=() 00:09:12.079 11:45:01 -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.079 11:45:01 -- nvmf/common.sh@298 -- # mlx=() 00:09:12.079 11:45:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.079 11:45:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.079 11:45:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.079 11:45:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.079 11:45:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.079 11:45:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.079 11:45:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:12.079 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:12.079 11:45:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.079 11:45:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:12.079 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:12.079 11:45:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.079 11:45:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.079 11:45:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.079 11:45:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:12.079 11:45:01 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.079 11:45:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:12.079 Found net devices under 0000:af:00.0: cvl_0_0 00:09:12.079 11:45:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.079 11:45:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.079 11:45:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.079 11:45:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:12.079 11:45:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.079 11:45:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:12.079 Found net devices under 0000:af:00.1: cvl_0_1 00:09:12.079 11:45:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.079 11:45:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:12.079 11:45:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:12.079 11:45:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:12.079 11:45:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:12.079 11:45:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.079 11:45:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.079 11:45:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.079 11:45:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.079 11:45:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.079 11:45:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.079 11:45:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.079 11:45:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.079 11:45:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.079 11:45:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.079 11:45:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.079 11:45:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.079 11:45:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.079 11:45:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.079 11:45:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.079 11:45:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.079 11:45:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.079 11:45:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.079 11:45:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.079 11:45:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:09:12.079 00:09:12.079 --- 10.0.0.2 ping statistics --- 00:09:12.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.079 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:09:12.079 11:45:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:12.079 00:09:12.079 --- 10.0.0.1 ping statistics --- 00:09:12.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.079 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:12.079 11:45:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.079 11:45:02 -- nvmf/common.sh@411 -- # return 0 00:09:12.079 11:45:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:12.079 11:45:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.079 11:45:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:12.079 11:45:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:12.079 11:45:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.079 11:45:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:12.079 11:45:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:12.079 11:45:02 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:12.079 11:45:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:12.079 11:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.079 11:45:02 -- common/autotest_common.sh@10 -- # set +x 00:09:12.079 ************************************ 00:09:12.079 START TEST nvmf_filesystem_no_in_capsule 00:09:12.079 ************************************ 00:09:12.079 11:45:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:09:12.079 11:45:02 -- target/filesystem.sh@47 -- # in_capsule=0 00:09:12.079 11:45:02 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:12.079 11:45:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:12.079 11:45:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:12.079 11:45:02 -- common/autotest_common.sh@10 -- # set +x 00:09:12.079 11:45:02 -- nvmf/common.sh@470 -- # nvmfpid=2341238 00:09:12.079 11:45:02 -- nvmf/common.sh@471 -- # waitforlisten 2341238 00:09:12.079 11:45:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.079 11:45:02 -- common/autotest_common.sh@817 -- # '[' -z 2341238 ']' 00:09:12.079 11:45:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.079 11:45:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:12.079 11:45:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.079 11:45:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:12.079 11:45:02 -- common/autotest_common.sh@10 -- # set +x 00:09:12.079 [2024-04-18 11:45:02.620366] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:09:12.079 [2024-04-18 11:45:02.620446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.339 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.339 [2024-04-18 11:45:02.751872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.597 [2024-04-18 11:45:02.975975] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
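For reference, the nvmf_tcp_init and nvmfappstart sequence traced above reduces to roughly the following, using the interface names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2). This is an illustrative sketch, not the exact common.sh code, and paths are assumed relative to the spdk checkout:

  # put one port of the NIC pair into its own namespace so target and
  # initiator traffic actually crosses the wire
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator side -> target IP
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target side -> initiator IP
  modprobe nvme-tcp
  # the target itself then runs inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &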
00:09:12.597 [2024-04-18 11:45:02.976028] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.597 [2024-04-18 11:45:02.976040] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.597 [2024-04-18 11:45:02.976052] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.597 [2024-04-18 11:45:02.976061] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.597 [2024-04-18 11:45:02.976135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.597 [2024-04-18 11:45:02.976211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.597 [2024-04-18 11:45:02.976272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.597 [2024-04-18 11:45:02.976281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.164 11:45:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:13.164 11:45:03 -- common/autotest_common.sh@850 -- # return 0 00:09:13.164 11:45:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:13.164 11:45:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:13.164 11:45:03 -- common/autotest_common.sh@10 -- # set +x 00:09:13.164 11:45:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.164 11:45:03 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:13.164 11:45:03 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:13.164 11:45:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.165 11:45:03 -- common/autotest_common.sh@10 -- # set +x 00:09:13.165 [2024-04-18 11:45:03.458183] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.165 11:45:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.165 11:45:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:13.165 11:45:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.165 11:45:03 -- common/autotest_common.sh@10 -- # set +x 00:09:13.732 Malloc1 00:09:13.732 11:45:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.732 11:45:04 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.732 11:45:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.732 11:45:04 -- common/autotest_common.sh@10 -- # set +x 00:09:13.732 11:45:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.732 11:45:04 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:13.732 11:45:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.732 11:45:04 -- common/autotest_common.sh@10 -- # set +x 00:09:13.732 11:45:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.732 11:45:04 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.732 11:45:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.732 11:45:04 -- common/autotest_common.sh@10 -- # set +x 00:09:13.733 [2024-04-18 11:45:04.155190] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.733 11:45:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.733 11:45:04 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:09:13.733 11:45:04 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:09:13.733 11:45:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:09:13.733 11:45:04 -- common/autotest_common.sh@1366 -- # local bs 00:09:13.733 11:45:04 -- common/autotest_common.sh@1367 -- # local nb 00:09:13.733 11:45:04 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:13.733 11:45:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.733 11:45:04 -- common/autotest_common.sh@10 -- # set +x 00:09:13.733 11:45:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.733 11:45:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:09:13.733 { 00:09:13.733 "name": "Malloc1", 00:09:13.733 "aliases": [ 00:09:13.733 "71cd1980-ea17-45fc-b094-38a6f7d05881" 00:09:13.733 ], 00:09:13.733 "product_name": "Malloc disk", 00:09:13.733 "block_size": 512, 00:09:13.733 "num_blocks": 1048576, 00:09:13.733 "uuid": "71cd1980-ea17-45fc-b094-38a6f7d05881", 00:09:13.733 "assigned_rate_limits": { 00:09:13.733 "rw_ios_per_sec": 0, 00:09:13.733 "rw_mbytes_per_sec": 0, 00:09:13.733 "r_mbytes_per_sec": 0, 00:09:13.733 "w_mbytes_per_sec": 0 00:09:13.733 }, 00:09:13.733 "claimed": true, 00:09:13.733 "claim_type": "exclusive_write", 00:09:13.733 "zoned": false, 00:09:13.733 "supported_io_types": { 00:09:13.733 "read": true, 00:09:13.733 "write": true, 00:09:13.733 "unmap": true, 00:09:13.733 "write_zeroes": true, 00:09:13.733 "flush": true, 00:09:13.733 "reset": true, 00:09:13.733 "compare": false, 00:09:13.733 "compare_and_write": false, 00:09:13.733 "abort": true, 00:09:13.733 "nvme_admin": false, 00:09:13.733 "nvme_io": false 00:09:13.733 }, 00:09:13.733 "memory_domains": [ 00:09:13.733 { 00:09:13.733 "dma_device_id": "system", 00:09:13.733 "dma_device_type": 1 00:09:13.733 }, 00:09:13.733 { 00:09:13.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.733 "dma_device_type": 2 00:09:13.733 } 00:09:13.733 ], 00:09:13.733 "driver_specific": {} 00:09:13.733 } 00:09:13.733 ]' 00:09:13.733 11:45:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:09:13.733 11:45:04 -- common/autotest_common.sh@1369 -- # bs=512 00:09:13.733 11:45:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:09:13.733 11:45:04 -- common/autotest_common.sh@1370 -- # nb=1048576 00:09:13.733 11:45:04 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:09:13.733 11:45:04 -- common/autotest_common.sh@1374 -- # echo 512 00:09:13.733 11:45:04 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:13.733 11:45:04 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.110 11:45:05 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.110 11:45:05 -- common/autotest_common.sh@1184 -- # local i=0 00:09:15.110 11:45:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.110 11:45:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:15.110 11:45:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:17.643 11:45:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:17.643 11:45:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:17.643 11:45:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.643 11:45:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
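Condensed, the target provisioning and host attach traced above comes down to the sketch below. rpc_cmd in the trace corresponds to scripts/rpc.py; the NQN, serial and addresses are the ones from this run, and the size check is shown as one jq expression where the test uses two separate calls:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev with 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # malloc_size = block_size * num_blocks = 512 * 1048576 = 536870912
  ./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME        # poll until the namespace shows up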
00:09:17.643 11:45:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.643 11:45:07 -- common/autotest_common.sh@1194 -- # return 0 00:09:17.643 11:45:07 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:17.643 11:45:07 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:17.643 11:45:07 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:17.643 11:45:07 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:17.643 11:45:07 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:17.643 11:45:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:17.643 11:45:07 -- setup/common.sh@80 -- # echo 536870912 00:09:17.643 11:45:07 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:17.643 11:45:07 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:17.643 11:45:07 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:17.643 11:45:07 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:17.643 11:45:07 -- target/filesystem.sh@69 -- # partprobe 00:09:17.901 11:45:08 -- target/filesystem.sh@70 -- # sleep 1 00:09:18.833 11:45:09 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:18.833 11:45:09 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:18.833 11:45:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:18.833 11:45:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.834 11:45:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.091 ************************************ 00:09:19.091 START TEST filesystem_ext4 00:09:19.091 ************************************ 00:09:19.091 11:45:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:19.091 11:45:09 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:19.091 11:45:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.091 11:45:09 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:19.091 11:45:09 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:19.091 11:45:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:19.091 11:45:09 -- common/autotest_common.sh@914 -- # local i=0 00:09:19.091 11:45:09 -- common/autotest_common.sh@915 -- # local force 00:09:19.091 11:45:09 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:19.091 11:45:09 -- common/autotest_common.sh@918 -- # force=-F 00:09:19.091 11:45:09 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:19.091 mke2fs 1.46.5 (30-Dec-2021) 00:09:19.091 Discarding device blocks: 0/522240 done 00:09:19.091 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:19.091 Filesystem UUID: 794a0d57-3676-4d45-bea2-711b5e1dfbd9 00:09:19.091 Superblock backups stored on blocks: 00:09:19.091 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:19.091 00:09:19.091 Allocating group tables: 0/64 done 00:09:19.091 Writing inode tables: 0/64 done 00:09:19.348 Creating journal (8192 blocks): done 00:09:20.174 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:09:20.174 00:09:20.174 11:45:10 -- common/autotest_common.sh@931 -- # return 0 00:09:20.174 11:45:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:20.433 11:45:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:20.433 11:45:10 -- target/filesystem.sh@25 -- # sync 00:09:20.433 11:45:10 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:09:20.433 11:45:10 -- target/filesystem.sh@27 -- # sync 00:09:20.433 11:45:10 -- target/filesystem.sh@29 -- # i=0 00:09:20.433 11:45:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:20.433 11:45:10 -- target/filesystem.sh@37 -- # kill -0 2341238 00:09:20.433 11:45:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:20.433 11:45:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:20.691 11:45:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:20.691 11:45:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:20.691 00:09:20.691 real 0m1.528s 00:09:20.691 user 0m0.027s 00:09:20.691 sys 0m0.077s 00:09:20.691 11:45:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.691 11:45:10 -- common/autotest_common.sh@10 -- # set +x 00:09:20.691 ************************************ 00:09:20.691 END TEST filesystem_ext4 00:09:20.691 ************************************ 00:09:20.691 11:45:11 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:20.691 11:45:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:20.691 11:45:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.691 11:45:11 -- common/autotest_common.sh@10 -- # set +x 00:09:20.691 ************************************ 00:09:20.691 START TEST filesystem_btrfs 00:09:20.691 ************************************ 00:09:20.691 11:45:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:20.691 11:45:11 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:20.691 11:45:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:20.691 11:45:11 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:20.691 11:45:11 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:20.691 11:45:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:20.691 11:45:11 -- common/autotest_common.sh@914 -- # local i=0 00:09:20.691 11:45:11 -- common/autotest_common.sh@915 -- # local force 00:09:20.691 11:45:11 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:20.691 11:45:11 -- common/autotest_common.sh@920 -- # force=-f 00:09:20.691 11:45:11 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:20.948 btrfs-progs v6.6.2 00:09:20.948 See https://btrfs.readthedocs.io for more information. 00:09:20.948 00:09:20.948 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:20.948 NOTE: several default settings have changed in version 5.15, please make sure 00:09:20.948 this does not affect your deployments: 00:09:20.948 - DUP for metadata (-m dup) 00:09:20.948 - enabled no-holes (-O no-holes) 00:09:20.948 - enabled free-space-tree (-R free-space-tree) 00:09:20.948 00:09:20.948 Label: (null) 00:09:20.948 UUID: 4d7a6a55-6171-468f-bcc6-165dcb8e8da6 00:09:20.948 Node size: 16384 00:09:20.948 Sector size: 4096 00:09:20.948 Filesystem size: 510.00MiB 00:09:20.948 Block group profiles: 00:09:20.948 Data: single 8.00MiB 00:09:20.948 Metadata: DUP 32.00MiB 00:09:20.948 System: DUP 8.00MiB 00:09:20.948 SSD detected: yes 00:09:20.948 Zoned device: no 00:09:20.948 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:20.948 Runtime features: free-space-tree 00:09:20.948 Checksum: crc32c 00:09:20.948 Number of devices: 1 00:09:20.948 Devices: 00:09:20.948 ID SIZE PATH 00:09:20.948 1 510.00MiB /dev/nvme0n1p1 00:09:20.948 00:09:20.948 11:45:11 -- common/autotest_common.sh@931 -- # return 0 00:09:20.948 11:45:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.879 11:45:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.879 11:45:12 -- target/filesystem.sh@25 -- # sync 00:09:21.879 11:45:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.136 11:45:12 -- target/filesystem.sh@27 -- # sync 00:09:22.136 11:45:12 -- target/filesystem.sh@29 -- # i=0 00:09:22.137 11:45:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.137 11:45:12 -- target/filesystem.sh@37 -- # kill -0 2341238 00:09:22.137 11:45:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:22.137 11:45:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.137 11:45:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.137 11:45:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.137 00:09:22.137 real 0m1.289s 00:09:22.137 user 0m0.025s 00:09:22.137 sys 0m0.150s 00:09:22.137 11:45:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.137 11:45:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.137 ************************************ 00:09:22.137 END TEST filesystem_btrfs 00:09:22.137 ************************************ 00:09:22.137 11:45:12 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:22.137 11:45:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:22.137 11:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.137 11:45:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.395 ************************************ 00:09:22.395 START TEST filesystem_xfs 00:09:22.395 ************************************ 00:09:22.395 11:45:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:09:22.395 11:45:12 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:22.395 11:45:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:22.395 11:45:12 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:22.395 11:45:12 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:09:22.395 11:45:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:22.395 11:45:12 -- common/autotest_common.sh@914 -- # local i=0 00:09:22.395 11:45:12 -- common/autotest_common.sh@915 -- # local force 00:09:22.395 11:45:12 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:22.395 11:45:12 -- common/autotest_common.sh@920 -- # force=-f 00:09:22.395 11:45:12 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:22.395 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:22.395 = sectsz=512 attr=2, projid32bit=1 00:09:22.395 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:22.395 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:22.395 data = bsize=4096 blocks=130560, imaxpct=25 00:09:22.395 = sunit=0 swidth=0 blks 00:09:22.395 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:22.395 log =internal log bsize=4096 blocks=16384, version=2 00:09:22.395 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:22.395 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:23.329 Discarding blocks...Done. 00:09:23.329 11:45:13 -- common/autotest_common.sh@931 -- # return 0 00:09:23.329 11:45:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:25.858 11:45:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:25.858 11:45:16 -- target/filesystem.sh@25 -- # sync 00:09:25.858 11:45:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:25.858 11:45:16 -- target/filesystem.sh@27 -- # sync 00:09:25.858 11:45:16 -- target/filesystem.sh@29 -- # i=0 00:09:25.858 11:45:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:25.858 11:45:16 -- target/filesystem.sh@37 -- # kill -0 2341238 00:09:25.858 11:45:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:25.858 11:45:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:25.858 11:45:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:25.858 11:45:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:25.858 00:09:25.858 real 0m3.663s 00:09:25.858 user 0m0.033s 00:09:25.858 sys 0m0.081s 00:09:25.858 11:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.859 11:45:16 -- common/autotest_common.sh@10 -- # set +x 00:09:25.859 ************************************ 00:09:25.859 END TEST filesystem_xfs 00:09:25.859 ************************************ 00:09:26.117 11:45:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:26.376 11:45:16 -- target/filesystem.sh@93 -- # sync 00:09:26.376 11:45:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.376 11:45:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.376 11:45:16 -- common/autotest_common.sh@1205 -- # local i=0 00:09:26.376 11:45:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:26.376 11:45:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.376 11:45:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:26.376 11:45:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.376 11:45:16 -- common/autotest_common.sh@1217 -- # return 0 00:09:26.376 11:45:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.376 11:45:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.376 11:45:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.376 11:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.376 11:45:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:26.376 11:45:16 -- target/filesystem.sh@101 -- # killprocess 2341238 00:09:26.376 11:45:16 -- common/autotest_common.sh@936 -- # '[' -z 2341238 ']' 00:09:26.376 11:45:16 -- common/autotest_common.sh@940 -- # kill -0 2341238 00:09:26.376 11:45:16 -- 
common/autotest_common.sh@941 -- # uname 00:09:26.376 11:45:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:26.376 11:45:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2341238 00:09:26.634 11:45:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:26.634 11:45:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:26.634 11:45:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2341238' 00:09:26.634 killing process with pid 2341238 00:09:26.634 11:45:16 -- common/autotest_common.sh@955 -- # kill 2341238 00:09:26.634 11:45:16 -- common/autotest_common.sh@960 -- # wait 2341238 00:09:29.230 11:45:19 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:29.230 00:09:29.230 real 0m17.138s 00:09:29.230 user 1m4.901s 00:09:29.230 sys 0m2.094s 00:09:29.230 11:45:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:29.230 11:45:19 -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 ************************************ 00:09:29.230 END TEST nvmf_filesystem_no_in_capsule 00:09:29.230 ************************************ 00:09:29.230 11:45:19 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:29.230 11:45:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:29.230 11:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:29.230 11:45:19 -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 ************************************ 00:09:29.490 START TEST nvmf_filesystem_in_capsule 00:09:29.490 ************************************ 00:09:29.490 11:45:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:09:29.490 11:45:19 -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:29.490 11:45:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:29.490 11:45:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:29.490 11:45:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:29.490 11:45:19 -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 11:45:19 -- nvmf/common.sh@470 -- # nvmfpid=2344851 00:09:29.490 11:45:19 -- nvmf/common.sh@471 -- # waitforlisten 2344851 00:09:29.490 11:45:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.490 11:45:19 -- common/autotest_common.sh@817 -- # '[' -z 2344851 ']' 00:09:29.490 11:45:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.490 11:45:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:29.490 11:45:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.490 11:45:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:29.490 11:45:19 -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 [2024-04-18 11:45:19.970754] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
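The in_capsule pass that starts here repeats the same provisioning and filesystem checks; functionally the only difference from nvmf_filesystem_no_in_capsule is the in-capsule data size handed to the transport when it is created, roughly:

  # no_in_capsule run
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # in_capsule run: allow up to 4096 bytes of data carried inside the command capsule
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096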
00:09:29.490 [2024-04-18 11:45:19.970858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.749 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.749 [2024-04-18 11:45:20.104250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.008 [2024-04-18 11:45:20.330438] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.008 [2024-04-18 11:45:20.330494] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.008 [2024-04-18 11:45:20.330507] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.008 [2024-04-18 11:45:20.330521] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.008 [2024-04-18 11:45:20.330533] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.008 [2024-04-18 11:45:20.330616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.008 [2024-04-18 11:45:20.330688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.008 [2024-04-18 11:45:20.330755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.008 [2024-04-18 11:45:20.330762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.267 11:45:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:30.267 11:45:20 -- common/autotest_common.sh@850 -- # return 0 00:09:30.267 11:45:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:30.267 11:45:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:30.267 11:45:20 -- common/autotest_common.sh@10 -- # set +x 00:09:30.267 11:45:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.267 11:45:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:30.267 11:45:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:30.267 11:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.268 11:45:20 -- common/autotest_common.sh@10 -- # set +x 00:09:30.268 [2024-04-18 11:45:20.802637] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.268 11:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.268 11:45:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:30.268 11:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.268 11:45:20 -- common/autotest_common.sh@10 -- # set +x 00:09:31.203 Malloc1 00:09:31.203 11:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:31.203 11:45:21 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:31.203 11:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:31.203 11:45:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.203 11:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:31.203 11:45:21 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:31.203 11:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:31.203 11:45:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.203 11:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:31.203 11:45:21 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.203 11:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:31.203 11:45:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.203 [2024-04-18 11:45:21.545855] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.203 11:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:31.203 11:45:21 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:31.203 11:45:21 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:09:31.203 11:45:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:09:31.203 11:45:21 -- common/autotest_common.sh@1366 -- # local bs 00:09:31.203 11:45:21 -- common/autotest_common.sh@1367 -- # local nb 00:09:31.203 11:45:21 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:31.203 11:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:31.203 11:45:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.203 11:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:31.203 11:45:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:09:31.203 { 00:09:31.204 "name": "Malloc1", 00:09:31.204 "aliases": [ 00:09:31.204 "c51107ed-ecdc-4bba-a51f-dcccb170ac60" 00:09:31.204 ], 00:09:31.204 "product_name": "Malloc disk", 00:09:31.204 "block_size": 512, 00:09:31.204 "num_blocks": 1048576, 00:09:31.204 "uuid": "c51107ed-ecdc-4bba-a51f-dcccb170ac60", 00:09:31.204 "assigned_rate_limits": { 00:09:31.204 "rw_ios_per_sec": 0, 00:09:31.204 "rw_mbytes_per_sec": 0, 00:09:31.204 "r_mbytes_per_sec": 0, 00:09:31.204 "w_mbytes_per_sec": 0 00:09:31.204 }, 00:09:31.204 "claimed": true, 00:09:31.204 "claim_type": "exclusive_write", 00:09:31.204 "zoned": false, 00:09:31.204 "supported_io_types": { 00:09:31.204 "read": true, 00:09:31.204 "write": true, 00:09:31.204 "unmap": true, 00:09:31.204 "write_zeroes": true, 00:09:31.204 "flush": true, 00:09:31.204 "reset": true, 00:09:31.204 "compare": false, 00:09:31.204 "compare_and_write": false, 00:09:31.204 "abort": true, 00:09:31.204 "nvme_admin": false, 00:09:31.204 "nvme_io": false 00:09:31.204 }, 00:09:31.204 "memory_domains": [ 00:09:31.204 { 00:09:31.204 "dma_device_id": "system", 00:09:31.204 "dma_device_type": 1 00:09:31.204 }, 00:09:31.204 { 00:09:31.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.204 "dma_device_type": 2 00:09:31.204 } 00:09:31.204 ], 00:09:31.204 "driver_specific": {} 00:09:31.204 } 00:09:31.204 ]' 00:09:31.204 11:45:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:09:31.204 11:45:21 -- common/autotest_common.sh@1369 -- # bs=512 00:09:31.204 11:45:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:09:31.204 11:45:21 -- common/autotest_common.sh@1370 -- # nb=1048576 00:09:31.204 11:45:21 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:09:31.204 11:45:21 -- common/autotest_common.sh@1374 -- # echo 512 00:09:31.204 11:45:21 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:31.204 11:45:21 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:32.580 11:45:23 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:32.580 11:45:23 -- common/autotest_common.sh@1184 -- # local i=0 00:09:32.580 11:45:23 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.580 11:45:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:32.580 11:45:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:35.110 11:45:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:35.110 11:45:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:35.110 11:45:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.110 11:45:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:35.110 11:45:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.110 11:45:25 -- common/autotest_common.sh@1194 -- # return 0 00:09:35.110 11:45:25 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:35.110 11:45:25 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:35.110 11:45:25 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:35.110 11:45:25 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:35.110 11:45:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:35.110 11:45:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:35.110 11:45:25 -- setup/common.sh@80 -- # echo 536870912 00:09:35.110 11:45:25 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:35.110 11:45:25 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:35.110 11:45:25 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:35.110 11:45:25 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:35.110 11:45:25 -- target/filesystem.sh@69 -- # partprobe 00:09:35.369 11:45:25 -- target/filesystem.sh@70 -- # sleep 1 00:09:36.303 11:45:26 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:36.303 11:45:26 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:36.303 11:45:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:36.303 11:45:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.303 11:45:26 -- common/autotest_common.sh@10 -- # set +x 00:09:36.562 ************************************ 00:09:36.562 START TEST filesystem_in_capsule_ext4 00:09:36.562 ************************************ 00:09:36.562 11:45:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:36.562 11:45:26 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:36.562 11:45:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:36.562 11:45:26 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:36.562 11:45:26 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:36.562 11:45:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:36.562 11:45:26 -- common/autotest_common.sh@914 -- # local i=0 00:09:36.562 11:45:26 -- common/autotest_common.sh@915 -- # local force 00:09:36.562 11:45:26 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:36.562 11:45:26 -- common/autotest_common.sh@918 -- # force=-F 00:09:36.562 11:45:26 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:36.562 mke2fs 1.46.5 (30-Dec-2021) 00:09:36.562 Discarding device blocks: 0/522240 done 00:09:36.562 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:36.562 Filesystem UUID: e22099e0-8f27-4049-b5c7-ae27247f530d 00:09:36.562 Superblock backups stored on blocks: 00:09:36.562 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:36.562 00:09:36.562 
Allocating group tables: 0/64 done 00:09:36.562 Writing inode tables: 0/64 done 00:09:36.820 Creating journal (8192 blocks): done 00:09:37.078 Writing superblocks and filesystem accounting information: 0/64 done 00:09:37.078 00:09:37.078 11:45:27 -- common/autotest_common.sh@931 -- # return 0 00:09:37.078 11:45:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:37.078 11:45:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:37.078 11:45:27 -- target/filesystem.sh@25 -- # sync 00:09:37.078 11:45:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:37.078 11:45:27 -- target/filesystem.sh@27 -- # sync 00:09:37.078 11:45:27 -- target/filesystem.sh@29 -- # i=0 00:09:37.078 11:45:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:37.078 11:45:27 -- target/filesystem.sh@37 -- # kill -0 2344851 00:09:37.078 11:45:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:37.078 11:45:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:37.078 11:45:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:37.079 11:45:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:37.079 00:09:37.079 real 0m0.626s 00:09:37.079 user 0m0.031s 00:09:37.079 sys 0m0.075s 00:09:37.079 11:45:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:37.079 11:45:27 -- common/autotest_common.sh@10 -- # set +x 00:09:37.079 ************************************ 00:09:37.079 END TEST filesystem_in_capsule_ext4 00:09:37.079 ************************************ 00:09:37.337 11:45:27 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:37.337 11:45:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:37.338 11:45:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.338 11:45:27 -- common/autotest_common.sh@10 -- # set +x 00:09:37.338 ************************************ 00:09:37.338 START TEST filesystem_in_capsule_btrfs 00:09:37.338 ************************************ 00:09:37.338 11:45:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:37.338 11:45:27 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:37.338 11:45:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:37.338 11:45:27 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:37.338 11:45:27 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:37.338 11:45:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:37.338 11:45:27 -- common/autotest_common.sh@914 -- # local i=0 00:09:37.338 11:45:27 -- common/autotest_common.sh@915 -- # local force 00:09:37.338 11:45:27 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:37.338 11:45:27 -- common/autotest_common.sh@920 -- # force=-f 00:09:37.338 11:45:27 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:37.596 btrfs-progs v6.6.2 00:09:37.596 See https://btrfs.readthedocs.io for more information. 00:09:37.596 00:09:37.596 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:37.596 NOTE: several default settings have changed in version 5.15, please make sure 00:09:37.596 this does not affect your deployments: 00:09:37.596 - DUP for metadata (-m dup) 00:09:37.596 - enabled no-holes (-O no-holes) 00:09:37.596 - enabled free-space-tree (-R free-space-tree) 00:09:37.596 00:09:37.596 Label: (null) 00:09:37.596 UUID: a540901b-f96c-400d-8c37-b1622b9eae6d 00:09:37.596 Node size: 16384 00:09:37.596 Sector size: 4096 00:09:37.596 Filesystem size: 510.00MiB 00:09:37.596 Block group profiles: 00:09:37.596 Data: single 8.00MiB 00:09:37.596 Metadata: DUP 32.00MiB 00:09:37.596 System: DUP 8.00MiB 00:09:37.596 SSD detected: yes 00:09:37.596 Zoned device: no 00:09:37.596 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:37.596 Runtime features: free-space-tree 00:09:37.596 Checksum: crc32c 00:09:37.596 Number of devices: 1 00:09:37.596 Devices: 00:09:37.596 ID SIZE PATH 00:09:37.596 1 510.00MiB /dev/nvme0n1p1 00:09:37.596 00:09:37.596 11:45:27 -- common/autotest_common.sh@931 -- # return 0 00:09:37.596 11:45:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:38.163 11:45:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:38.163 11:45:28 -- target/filesystem.sh@25 -- # sync 00:09:38.163 11:45:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:38.163 11:45:28 -- target/filesystem.sh@27 -- # sync 00:09:38.163 11:45:28 -- target/filesystem.sh@29 -- # i=0 00:09:38.163 11:45:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:38.163 11:45:28 -- target/filesystem.sh@37 -- # kill -0 2344851 00:09:38.163 11:45:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:38.163 11:45:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:38.163 11:45:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:38.163 11:45:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:38.163 00:09:38.163 real 0m0.876s 00:09:38.163 user 0m0.030s 00:09:38.163 sys 0m0.143s 00:09:38.163 11:45:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:38.163 11:45:28 -- common/autotest_common.sh@10 -- # set +x 00:09:38.163 ************************************ 00:09:38.163 END TEST filesystem_in_capsule_btrfs 00:09:38.163 ************************************ 00:09:38.163 11:45:28 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:38.163 11:45:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:38.163 11:45:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.163 11:45:28 -- common/autotest_common.sh@10 -- # set +x 00:09:38.422 ************************************ 00:09:38.422 START TEST filesystem_in_capsule_xfs 00:09:38.422 ************************************ 00:09:38.422 11:45:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:09:38.422 11:45:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:38.422 11:45:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:38.422 11:45:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:38.422 11:45:28 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:09:38.422 11:45:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:38.422 11:45:28 -- common/autotest_common.sh@914 -- # local i=0 00:09:38.422 11:45:28 -- common/autotest_common.sh@915 -- # local force 00:09:38.422 11:45:28 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:38.422 11:45:28 -- common/autotest_common.sh@920 -- # force=-f 
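Each filesystem pass (ext4 and btrfs above, xfs below) runs the same check from target/filesystem.sh; schematically, with $force being -F for ext4 and -f for btrfs/xfs:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # done once per run, then partprobe
  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 $nvmfpid                          # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must still be visible on the host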
00:09:38.422 11:45:28 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:38.422 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:38.422 = sectsz=512 attr=2, projid32bit=1 00:09:38.422 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:38.422 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:38.422 data = bsize=4096 blocks=130560, imaxpct=25 00:09:38.422 = sunit=0 swidth=0 blks 00:09:38.422 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:38.422 log =internal log bsize=4096 blocks=16384, version=2 00:09:38.422 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:38.422 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:39.797 Discarding blocks...Done. 00:09:39.797 11:45:29 -- common/autotest_common.sh@931 -- # return 0 00:09:39.797 11:45:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:41.699 11:45:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:41.699 11:45:31 -- target/filesystem.sh@25 -- # sync 00:09:41.699 11:45:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:41.699 11:45:31 -- target/filesystem.sh@27 -- # sync 00:09:41.699 11:45:31 -- target/filesystem.sh@29 -- # i=0 00:09:41.699 11:45:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:41.699 11:45:31 -- target/filesystem.sh@37 -- # kill -0 2344851 00:09:41.699 11:45:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:41.699 11:45:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:41.699 11:45:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:41.699 11:45:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:41.699 00:09:41.699 real 0m2.986s 00:09:41.699 user 0m0.028s 00:09:41.699 sys 0m0.084s 00:09:41.699 11:45:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:41.699 11:45:31 -- common/autotest_common.sh@10 -- # set +x 00:09:41.699 ************************************ 00:09:41.699 END TEST filesystem_in_capsule_xfs 00:09:41.699 ************************************ 00:09:41.699 11:45:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:41.699 11:45:32 -- target/filesystem.sh@93 -- # sync 00:09:41.699 11:45:32 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.958 11:45:32 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.958 11:45:32 -- common/autotest_common.sh@1205 -- # local i=0 00:09:41.958 11:45:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:41.958 11:45:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.958 11:45:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:42.217 11:45:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.217 11:45:32 -- common/autotest_common.sh@1217 -- # return 0 00:09:42.217 11:45:32 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.217 11:45:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.217 11:45:32 -- common/autotest_common.sh@10 -- # set +x 00:09:42.217 11:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.217 11:45:32 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:42.217 11:45:32 -- target/filesystem.sh@101 -- # killprocess 2344851 00:09:42.217 11:45:32 -- common/autotest_common.sh@936 -- # '[' -z 2344851 ']' 00:09:42.217 11:45:32 -- common/autotest_common.sh@940 -- # kill -0 2344851 
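The teardown traced above, together with the nvmftestfini cleanup that follows, mirrors the setup; as a sketch with the values from this run:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2344851        # stop nvmf_tgt; nvmftestfini then unloads nvme-tcp and removes the netns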
00:09:42.217 11:45:32 -- common/autotest_common.sh@941 -- # uname 00:09:42.217 11:45:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:42.217 11:45:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2344851 00:09:42.217 11:45:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:42.217 11:45:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:42.217 11:45:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2344851' 00:09:42.217 killing process with pid 2344851 00:09:42.217 11:45:32 -- common/autotest_common.sh@955 -- # kill 2344851 00:09:42.217 11:45:32 -- common/autotest_common.sh@960 -- # wait 2344851 00:09:44.820 11:45:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:44.820 00:09:44.820 real 0m15.442s 00:09:44.820 user 0m58.131s 00:09:44.820 sys 0m2.031s 00:09:44.820 11:45:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.820 11:45:35 -- common/autotest_common.sh@10 -- # set +x 00:09:44.820 ************************************ 00:09:44.820 END TEST nvmf_filesystem_in_capsule 00:09:44.820 ************************************ 00:09:44.820 11:45:35 -- target/filesystem.sh@108 -- # nvmftestfini 00:09:44.820 11:45:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:44.820 11:45:35 -- nvmf/common.sh@117 -- # sync 00:09:44.820 11:45:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.820 11:45:35 -- nvmf/common.sh@120 -- # set +e 00:09:44.820 11:45:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.820 11:45:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.820 rmmod nvme_tcp 00:09:45.080 rmmod nvme_fabrics 00:09:45.080 rmmod nvme_keyring 00:09:45.080 11:45:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.080 11:45:35 -- nvmf/common.sh@124 -- # set -e 00:09:45.080 11:45:35 -- nvmf/common.sh@125 -- # return 0 00:09:45.080 11:45:35 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:09:45.080 11:45:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:45.080 11:45:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:45.080 11:45:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:45.080 11:45:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.080 11:45:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.080 11:45:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.080 11:45:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.080 11:45:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.984 11:45:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.984 00:09:46.984 real 0m41.812s 00:09:46.984 user 2m4.992s 00:09:46.984 sys 0m9.366s 00:09:46.984 11:45:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:46.984 11:45:37 -- common/autotest_common.sh@10 -- # set +x 00:09:46.984 ************************************ 00:09:46.984 END TEST nvmf_filesystem 00:09:46.984 ************************************ 00:09:46.985 11:45:37 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:46.985 11:45:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:46.985 11:45:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.985 11:45:37 -- common/autotest_common.sh@10 -- # set +x 00:09:47.243 ************************************ 00:09:47.243 START TEST nvmf_discovery 00:09:47.243 ************************************ 00:09:47.243 
11:45:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:47.566 * Looking for test storage... 00:09:47.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.566 11:45:37 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.566 11:45:37 -- nvmf/common.sh@7 -- # uname -s 00:09:47.566 11:45:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.566 11:45:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.566 11:45:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.566 11:45:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.566 11:45:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.566 11:45:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.566 11:45:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.566 11:45:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.566 11:45:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.566 11:45:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.566 11:45:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:47.566 11:45:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:47.566 11:45:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.566 11:45:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.566 11:45:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.566 11:45:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.566 11:45:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.566 11:45:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.566 11:45:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.566 11:45:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.566 11:45:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.566 11:45:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.566 11:45:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.566 11:45:37 -- paths/export.sh@5 -- # export PATH 00:09:47.566 11:45:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.566 11:45:37 -- nvmf/common.sh@47 -- # : 0 00:09:47.566 11:45:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.566 11:45:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.566 11:45:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.566 11:45:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.566 11:45:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.566 11:45:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.566 11:45:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.566 11:45:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.566 11:45:37 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:47.566 11:45:37 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:47.566 11:45:37 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:47.566 11:45:37 -- target/discovery.sh@15 -- # hash nvme 00:09:47.566 11:45:37 -- target/discovery.sh@20 -- # nvmftestinit 00:09:47.566 11:45:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:47.566 11:45:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.566 11:45:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:47.566 11:45:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:47.566 11:45:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:47.566 11:45:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.566 11:45:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.566 11:45:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.566 11:45:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:47.566 11:45:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:47.566 11:45:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.566 11:45:37 -- common/autotest_common.sh@10 -- # set +x 00:09:54.134 11:45:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:54.134 11:45:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.134 11:45:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.134 11:45:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.134 11:45:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.134 11:45:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.134 11:45:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.134 11:45:44 -- 
nvmf/common.sh@295 -- # net_devs=() 00:09:54.134 11:45:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.134 11:45:44 -- nvmf/common.sh@296 -- # e810=() 00:09:54.134 11:45:44 -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.134 11:45:44 -- nvmf/common.sh@297 -- # x722=() 00:09:54.134 11:45:44 -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.134 11:45:44 -- nvmf/common.sh@298 -- # mlx=() 00:09:54.134 11:45:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.134 11:45:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.134 11:45:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.134 11:45:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.134 11:45:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.134 11:45:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.134 11:45:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.134 11:45:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.134 11:45:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.134 11:45:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:54.134 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:54.134 11:45:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.135 11:45:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:54.135 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:54.135 11:45:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.135 11:45:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.135 11:45:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.135 11:45:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:54.135 11:45:44 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.135 11:45:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:54.135 Found net devices under 0000:af:00.0: cvl_0_0 00:09:54.135 11:45:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.135 11:45:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.135 11:45:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.135 11:45:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:54.135 11:45:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.135 11:45:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:54.135 Found net devices under 0000:af:00.1: cvl_0_1 00:09:54.135 11:45:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.135 11:45:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:54.135 11:45:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:54.135 11:45:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:54.135 11:45:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:54.135 11:45:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.135 11:45:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.135 11:45:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.135 11:45:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.135 11:45:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.135 11:45:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.135 11:45:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.135 11:45:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.135 11:45:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.135 11:45:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.135 11:45:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.135 11:45:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.135 11:45:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.394 11:45:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.394 11:45:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.394 11:45:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.394 11:45:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.394 11:45:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.394 11:45:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.394 11:45:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:54.394 00:09:54.394 --- 10.0.0.2 ping statistics --- 00:09:54.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.394 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:54.394 11:45:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:09:54.394 00:09:54.394 --- 10.0.0.1 ping statistics --- 00:09:54.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.394 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:54.394 11:45:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.394 11:45:44 -- nvmf/common.sh@411 -- # return 0 00:09:54.394 11:45:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:54.394 11:45:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.394 11:45:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:54.394 11:45:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:54.394 11:45:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.394 11:45:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:54.394 11:45:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:54.394 11:45:44 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:54.394 11:45:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:54.394 11:45:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:54.394 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 11:45:44 -- nvmf/common.sh@470 -- # nvmfpid=2351443 00:09:54.654 11:45:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.654 11:45:44 -- nvmf/common.sh@471 -- # waitforlisten 2351443 00:09:54.654 11:45:44 -- common/autotest_common.sh@817 -- # '[' -z 2351443 ']' 00:09:54.654 11:45:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.654 11:45:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:54.654 11:45:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.654 11:45:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:54.654 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 [2024-04-18 11:45:45.034093] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:09:54.654 [2024-04-18 11:45:45.034193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.654 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.654 [2024-04-18 11:45:45.162854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.913 [2024-04-18 11:45:45.382847] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.913 [2024-04-18 11:45:45.382907] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.913 [2024-04-18 11:45:45.382920] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.913 [2024-04-18 11:45:45.382934] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.913 [2024-04-18 11:45:45.382944] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
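Stripped of the xtrace prefixes, the network bring-up traced above (nvmf_tcp_init) is ordinary iproute2 plumbing: one port of the two-port e810 NIC stays in the root namespace as the initiator, the other is moved into a target namespace. A condensed sketch using the interface names and addresses seen in the log:

  # Move the target-side port into its own namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends and bring the links up.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port, verify reachability both ways, load the host driver.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

The nvmf_tgt process is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt, as seen below), which is why its listeners bind to 10.0.0.2 while the initiator tools run from the root namespace.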
00:09:54.913 [2024-04-18 11:45:45.383035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.913 [2024-04-18 11:45:45.383108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.913 [2024-04-18 11:45:45.383168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.913 [2024-04-18 11:45:45.383177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.480 11:45:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:55.480 11:45:45 -- common/autotest_common.sh@850 -- # return 0 00:09:55.480 11:45:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:55.480 11:45:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 11:45:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.480 11:45:45 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.480 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 [2024-04-18 11:45:45.851814] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.480 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.480 11:45:45 -- target/discovery.sh@26 -- # seq 1 4 00:09:55.480 11:45:45 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.480 11:45:45 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:55.480 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 Null1 00:09:55.480 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.480 11:45:45 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.480 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.480 11:45:45 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:55.480 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.480 11:45:45 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.480 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 [2024-04-18 11:45:45.904228] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.480 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.480 11:45:45 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.480 11:45:45 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:55.480 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 Null2 00:09:55.480 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.480 11:45:45 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:55.480 11:45:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.480 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.481 11:45:45 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 Null3 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.481 11:45:45 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 Null4 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:55.481 11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:45 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:55.481 
11:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:46 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:55.481 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:46 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:55.481 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.481 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.481 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.481 11:45:46 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:09:55.740 00:09:55.740 Discovery Log Number of Records 6, Generation counter 6 00:09:55.740 =====Discovery Log Entry 0====== 00:09:55.740 trtype: tcp 00:09:55.740 adrfam: ipv4 00:09:55.740 subtype: current discovery subsystem 00:09:55.740 treq: not required 00:09:55.740 portid: 0 00:09:55.740 trsvcid: 4420 00:09:55.740 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:55.740 traddr: 10.0.0.2 00:09:55.740 eflags: explicit discovery connections, duplicate discovery information 00:09:55.740 sectype: none 00:09:55.740 =====Discovery Log Entry 1====== 00:09:55.740 trtype: tcp 00:09:55.740 adrfam: ipv4 00:09:55.740 subtype: nvme subsystem 00:09:55.740 treq: not required 00:09:55.740 portid: 0 00:09:55.740 trsvcid: 4420 00:09:55.740 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:55.740 traddr: 10.0.0.2 00:09:55.740 eflags: none 00:09:55.740 sectype: none 00:09:55.740 =====Discovery Log Entry 2====== 00:09:55.740 trtype: tcp 00:09:55.740 adrfam: ipv4 00:09:55.740 subtype: nvme subsystem 00:09:55.740 treq: not required 00:09:55.740 portid: 0 00:09:55.740 trsvcid: 4420 00:09:55.740 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:55.740 traddr: 10.0.0.2 00:09:55.740 eflags: none 00:09:55.740 sectype: none 00:09:55.740 =====Discovery Log Entry 3====== 00:09:55.740 trtype: tcp 00:09:55.740 adrfam: ipv4 00:09:55.740 subtype: nvme subsystem 00:09:55.740 treq: not required 00:09:55.740 portid: 0 00:09:55.740 trsvcid: 4420 00:09:55.740 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:55.740 traddr: 10.0.0.2 00:09:55.740 eflags: none 00:09:55.740 sectype: none 00:09:55.740 =====Discovery Log Entry 4====== 00:09:55.740 trtype: tcp 00:09:55.740 adrfam: ipv4 00:09:55.740 subtype: nvme subsystem 00:09:55.740 treq: not required 00:09:55.740 portid: 0 00:09:55.740 trsvcid: 4420 00:09:55.740 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:55.740 traddr: 10.0.0.2 00:09:55.740 eflags: none 00:09:55.740 sectype: none 00:09:55.740 =====Discovery Log Entry 5====== 00:09:55.740 trtype: tcp 00:09:55.740 adrfam: ipv4 00:09:55.740 subtype: discovery subsystem referral 00:09:55.740 treq: not required 00:09:55.740 portid: 0 00:09:55.740 trsvcid: 4430 00:09:55.740 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:55.740 traddr: 10.0.0.2 00:09:55.740 eflags: none 00:09:55.740 sectype: none 00:09:55.740 11:45:46 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:55.740 Perform nvmf subsystem discovery via RPC 00:09:55.740 11:45:46 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:55.740 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.740 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.740 [2024-04-18 11:45:46.112669] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:55.740 [ 00:09:55.740 { 00:09:55.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.740 "subtype": "Discovery", 00:09:55.740 "listen_addresses": [ 00:09:55.740 { 00:09:55.740 "transport": "TCP", 00:09:55.740 "trtype": "TCP", 00:09:55.740 "adrfam": "IPv4", 00:09:55.740 "traddr": "10.0.0.2", 00:09:55.740 "trsvcid": "4420" 00:09:55.740 } 00:09:55.740 ], 00:09:55.740 "allow_any_host": true, 00:09:55.740 "hosts": [] 00:09:55.740 }, 00:09:55.740 { 00:09:55.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.740 "subtype": "NVMe", 00:09:55.740 "listen_addresses": [ 00:09:55.740 { 00:09:55.740 "transport": "TCP", 00:09:55.740 "trtype": "TCP", 00:09:55.740 "adrfam": "IPv4", 00:09:55.740 "traddr": "10.0.0.2", 00:09:55.740 "trsvcid": "4420" 00:09:55.740 } 00:09:55.740 ], 00:09:55.740 "allow_any_host": true, 00:09:55.740 "hosts": [], 00:09:55.740 "serial_number": "SPDK00000000000001", 00:09:55.740 "model_number": "SPDK bdev Controller", 00:09:55.740 "max_namespaces": 32, 00:09:55.740 "min_cntlid": 1, 00:09:55.740 "max_cntlid": 65519, 00:09:55.740 "namespaces": [ 00:09:55.740 { 00:09:55.740 "nsid": 1, 00:09:55.740 "bdev_name": "Null1", 00:09:55.740 "name": "Null1", 00:09:55.740 "nguid": "0230D888DC3D49D2B0EFC7BD2C8A8CAC", 00:09:55.740 "uuid": "0230d888-dc3d-49d2-b0ef-c7bd2c8a8cac" 00:09:55.740 } 00:09:55.740 ] 00:09:55.740 }, 00:09:55.740 { 00:09:55.740 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:55.740 "subtype": "NVMe", 00:09:55.740 "listen_addresses": [ 00:09:55.740 { 00:09:55.740 "transport": "TCP", 00:09:55.740 "trtype": "TCP", 00:09:55.740 "adrfam": "IPv4", 00:09:55.740 "traddr": "10.0.0.2", 00:09:55.740 "trsvcid": "4420" 00:09:55.740 } 00:09:55.740 ], 00:09:55.740 "allow_any_host": true, 00:09:55.740 "hosts": [], 00:09:55.740 "serial_number": "SPDK00000000000002", 00:09:55.741 "model_number": "SPDK bdev Controller", 00:09:55.741 "max_namespaces": 32, 00:09:55.741 "min_cntlid": 1, 00:09:55.741 "max_cntlid": 65519, 00:09:55.741 "namespaces": [ 00:09:55.741 { 00:09:55.741 "nsid": 1, 00:09:55.741 "bdev_name": "Null2", 00:09:55.741 "name": "Null2", 00:09:55.741 "nguid": "A6CFE8AE0F5240F7BE8A4EBE6B08DDB6", 00:09:55.741 "uuid": "a6cfe8ae-0f52-40f7-be8a-4ebe6b08ddb6" 00:09:55.741 } 00:09:55.741 ] 00:09:55.741 }, 00:09:55.741 { 00:09:55.741 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:55.741 "subtype": "NVMe", 00:09:55.741 "listen_addresses": [ 00:09:55.741 { 00:09:55.741 "transport": "TCP", 00:09:55.741 "trtype": "TCP", 00:09:55.741 "adrfam": "IPv4", 00:09:55.741 "traddr": "10.0.0.2", 00:09:55.741 "trsvcid": "4420" 00:09:55.741 } 00:09:55.741 ], 00:09:55.741 "allow_any_host": true, 00:09:55.741 "hosts": [], 00:09:55.741 "serial_number": "SPDK00000000000003", 00:09:55.741 "model_number": "SPDK bdev Controller", 00:09:55.741 "max_namespaces": 32, 00:09:55.741 "min_cntlid": 1, 00:09:55.741 "max_cntlid": 65519, 00:09:55.741 "namespaces": [ 00:09:55.741 { 00:09:55.741 "nsid": 1, 00:09:55.741 "bdev_name": "Null3", 00:09:55.741 "name": "Null3", 00:09:55.741 "nguid": "4CF7658B8144466EA46E61EF9F2CE274", 00:09:55.741 "uuid": "4cf7658b-8144-466e-a46e-61ef9f2ce274" 00:09:55.741 } 00:09:55.741 ] 
00:09:55.741 }, 00:09:55.741 { 00:09:55.741 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:55.741 "subtype": "NVMe", 00:09:55.741 "listen_addresses": [ 00:09:55.741 { 00:09:55.741 "transport": "TCP", 00:09:55.741 "trtype": "TCP", 00:09:55.741 "adrfam": "IPv4", 00:09:55.741 "traddr": "10.0.0.2", 00:09:55.741 "trsvcid": "4420" 00:09:55.741 } 00:09:55.741 ], 00:09:55.741 "allow_any_host": true, 00:09:55.741 "hosts": [], 00:09:55.741 "serial_number": "SPDK00000000000004", 00:09:55.741 "model_number": "SPDK bdev Controller", 00:09:55.741 "max_namespaces": 32, 00:09:55.741 "min_cntlid": 1, 00:09:55.741 "max_cntlid": 65519, 00:09:55.741 "namespaces": [ 00:09:55.741 { 00:09:55.741 "nsid": 1, 00:09:55.741 "bdev_name": "Null4", 00:09:55.741 "name": "Null4", 00:09:55.741 "nguid": "410BD4BC95124D3181D99283E2E68DD0", 00:09:55.741 "uuid": "410bd4bc-9512-4d31-81d9-9283e2e68dd0" 00:09:55.741 } 00:09:55.741 ] 00:09:55.741 } 00:09:55.741 ] 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@42 -- # seq 1 4 00:09:55.741 11:45:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.741 11:45:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.741 11:45:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.741 11:45:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.741 11:45:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
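The discovery test itself is driven entirely over JSON-RPC and then cross-checked from the initiator side, which is what produced the discovery log and the nvmf_get_subsystems dump above. A condensed sketch of the same flow, assuming scripts/rpc.py is pointed at the target's RPC socket (rpc_cmd in the trace is a thin wrapper around it; the --hostnqn/--hostid flags used in the trace are omitted here for brevity):

  # TCP transport plus one null-backed subsystem; the test repeats this for cnode1..cnode4.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_null_create Null1 102400 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Expose the discovery service and a referral to a second discovery port.
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

  # Initiator view (the 6 records printed above) and the target's own view.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_get_subsystems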
00:09:55.741 11:45:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:55.741 11:45:46 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:55.741 11:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.741 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.741 11:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.741 11:45:46 -- target/discovery.sh@49 -- # check_bdevs= 00:09:55.741 11:45:46 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:55.741 11:45:46 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:55.741 11:45:46 -- target/discovery.sh@57 -- # nvmftestfini 00:09:55.741 11:45:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:55.741 11:45:46 -- nvmf/common.sh@117 -- # sync 00:09:55.741 11:45:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.741 11:45:46 -- nvmf/common.sh@120 -- # set +e 00:09:55.741 11:45:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.741 11:45:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.741 rmmod nvme_tcp 00:09:56.000 rmmod nvme_fabrics 00:09:56.000 rmmod nvme_keyring 00:09:56.000 11:45:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.000 11:45:46 -- nvmf/common.sh@124 -- # set -e 00:09:56.000 11:45:46 -- nvmf/common.sh@125 -- # return 0 00:09:56.000 11:45:46 -- nvmf/common.sh@478 -- # '[' -n 2351443 ']' 00:09:56.000 11:45:46 -- nvmf/common.sh@479 -- # killprocess 2351443 00:09:56.000 11:45:46 -- common/autotest_common.sh@936 -- # '[' -z 2351443 ']' 00:09:56.000 11:45:46 -- common/autotest_common.sh@940 -- # kill -0 2351443 00:09:56.000 11:45:46 -- common/autotest_common.sh@941 -- # uname 00:09:56.000 11:45:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:56.000 11:45:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2351443 00:09:56.000 11:45:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:56.000 11:45:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:56.000 11:45:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2351443' 00:09:56.000 killing process with pid 2351443 00:09:56.000 11:45:46 -- common/autotest_common.sh@955 -- # kill 2351443 00:09:56.000 [2024-04-18 11:45:46.400844] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:56.000 11:45:46 -- common/autotest_common.sh@960 -- # wait 2351443 00:09:57.378 11:45:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:57.378 11:45:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:57.378 11:45:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:57.378 11:45:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.378 11:45:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.378 11:45:47 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.378 11:45:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.378 11:45:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.283 11:45:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.283 00:09:59.283 real 0m12.055s 00:09:59.283 user 0m9.813s 00:09:59.283 sys 0m5.917s 00:09:59.283 11:45:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.283 11:45:49 -- common/autotest_common.sh@10 -- # set +x 00:09:59.283 ************************************ 00:09:59.283 END TEST nvmf_discovery 00:09:59.283 ************************************ 00:09:59.283 11:45:49 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:59.283 11:45:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:59.283 11:45:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.283 11:45:49 -- common/autotest_common.sh@10 -- # set +x 00:09:59.542 ************************************ 00:09:59.542 START TEST nvmf_referrals 00:09:59.542 ************************************ 00:09:59.542 11:45:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:59.542 * Looking for test storage... 00:09:59.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.543 11:45:50 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.543 11:45:50 -- nvmf/common.sh@7 -- # uname -s 00:09:59.543 11:45:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.543 11:45:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.543 11:45:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.543 11:45:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.543 11:45:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.543 11:45:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.543 11:45:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.543 11:45:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.543 11:45:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.543 11:45:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.543 11:45:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:59.543 11:45:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:59.543 11:45:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.543 11:45:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.543 11:45:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.543 11:45:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.543 11:45:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.802 11:45:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.802 11:45:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.802 11:45:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.802 11:45:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.802 11:45:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.802 11:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.802 11:45:50 -- paths/export.sh@5 -- # export PATH 00:09:59.802 11:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.802 11:45:50 -- nvmf/common.sh@47 -- # : 0 00:09:59.802 11:45:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.802 11:45:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.802 11:45:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.802 11:45:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.802 11:45:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.802 11:45:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.802 11:45:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.803 11:45:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.803 11:45:50 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:59.803 11:45:50 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:59.803 11:45:50 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:59.803 11:45:50 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:59.803 11:45:50 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:59.803 11:45:50 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:59.803 11:45:50 -- target/referrals.sh@37 -- # nvmftestinit 00:09:59.803 11:45:50 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:09:59.803 11:45:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.803 11:45:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:59.803 11:45:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:59.803 11:45:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:59.803 11:45:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.803 11:45:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.803 11:45:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.803 11:45:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:59.803 11:45:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:59.803 11:45:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.803 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:10:06.377 11:45:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:06.377 11:45:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.377 11:45:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.377 11:45:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.377 11:45:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.377 11:45:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.377 11:45:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.377 11:45:56 -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.377 11:45:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.377 11:45:56 -- nvmf/common.sh@296 -- # e810=() 00:10:06.377 11:45:56 -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.377 11:45:56 -- nvmf/common.sh@297 -- # x722=() 00:10:06.377 11:45:56 -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.377 11:45:56 -- nvmf/common.sh@298 -- # mlx=() 00:10:06.377 11:45:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.377 11:45:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.377 11:45:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.377 11:45:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.377 11:45:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.377 11:45:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.377 11:45:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:06.377 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:06.377 11:45:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.377 11:45:56 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.377 11:45:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:06.377 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:06.377 11:45:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.377 11:45:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.377 11:45:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.377 11:45:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.377 11:45:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.377 11:45:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:06.377 Found net devices under 0000:af:00.0: cvl_0_0 00:10:06.377 11:45:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.377 11:45:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.377 11:45:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.377 11:45:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.377 11:45:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.377 11:45:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:06.377 Found net devices under 0000:af:00.1: cvl_0_1 00:10:06.377 11:45:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.377 11:45:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:06.377 11:45:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:06.377 11:45:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:06.377 11:45:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:06.377 11:45:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.377 11:45:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.377 11:45:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.377 11:45:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.377 11:45:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.377 11:45:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.377 11:45:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.377 11:45:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.377 11:45:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.377 11:45:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.377 11:45:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.378 11:45:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.378 11:45:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:10:06.378 11:45:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.378 11:45:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.378 11:45:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.378 11:45:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.378 11:45:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.378 11:45:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.378 11:45:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:10:06.378 00:10:06.378 --- 10.0.0.2 ping statistics --- 00:10:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.378 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:06.378 11:45:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:10:06.378 00:10:06.378 --- 10.0.0.1 ping statistics --- 00:10:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.378 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:10:06.378 11:45:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.378 11:45:56 -- nvmf/common.sh@411 -- # return 0 00:10:06.378 11:45:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:06.378 11:45:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.378 11:45:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:06.378 11:45:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:06.378 11:45:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.378 11:45:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:06.378 11:45:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:06.378 11:45:56 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:06.378 11:45:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:06.378 11:45:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:06.378 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:10:06.378 11:45:56 -- nvmf/common.sh@470 -- # nvmfpid=2355707 00:10:06.378 11:45:56 -- nvmf/common.sh@471 -- # waitforlisten 2355707 00:10:06.378 11:45:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.378 11:45:56 -- common/autotest_common.sh@817 -- # '[' -z 2355707 ']' 00:10:06.378 11:45:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.378 11:45:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.378 11:45:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.378 11:45:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.378 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:10:06.378 [2024-04-18 11:45:56.779751] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
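What follows is the referral half of the suite: three referrals are registered against the discovery service on port 8009, read back both over RPC and with nvme discover, then removed again. A sketch of that cycle in the underlying commands (rpc_cmd and get_referral_ips in the trace wrap exactly these calls; rpc.py stands in for the wrapper):

  # Discovery listener on 8009 and three referrals pointing at port 4430.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # Target-side and initiator-side views must agree on the three addresses.
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Removing them brings the referral count back to zero.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done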
00:10:06.378 [2024-04-18 11:45:56.779836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.378 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.378 [2024-04-18 11:45:56.907028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.637 [2024-04-18 11:45:57.118509] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.637 [2024-04-18 11:45:57.118553] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.637 [2024-04-18 11:45:57.118565] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.637 [2024-04-18 11:45:57.118578] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.637 [2024-04-18 11:45:57.118588] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.637 [2024-04-18 11:45:57.118666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.637 [2024-04-18 11:45:57.118737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.637 [2024-04-18 11:45:57.118803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.637 [2024-04-18 11:45:57.118812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.205 11:45:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:07.205 11:45:57 -- common/autotest_common.sh@850 -- # return 0 00:10:07.205 11:45:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:07.205 11:45:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:07.205 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.205 11:45:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.205 11:45:57 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.205 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.205 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.205 [2024-04-18 11:45:57.592422] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.205 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.205 11:45:57 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:07.205 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.206 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 [2024-04-18 11:45:57.608683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:07.206 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:07.206 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.206 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:07.206 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.206 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 11:45:57 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:07.206 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.206 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.206 11:45:57 -- target/referrals.sh@48 -- # jq length 00:10:07.206 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.206 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:07.206 11:45:57 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:07.206 11:45:57 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:07.206 11:45:57 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.206 11:45:57 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:07.206 11:45:57 -- target/referrals.sh@21 -- # sort 00:10:07.206 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.206 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:07.206 11:45:57 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:07.206 11:45:57 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:07.206 11:45:57 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.206 11:45:57 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.206 11:45:57 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.206 11:45:57 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.206 11:45:57 -- target/referrals.sh@26 -- # sort 00:10:07.465 11:45:57 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:07.465 11:45:57 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:07.465 11:45:57 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:07.465 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.465 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.465 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.465 11:45:57 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:07.465 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.465 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.465 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.465 11:45:57 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:07.465 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.465 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.465 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.465 11:45:57 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:10:07.465 11:45:57 -- target/referrals.sh@56 -- # jq length 00:10:07.465 11:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.465 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.465 11:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.725 11:45:58 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:07.725 11:45:58 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:07.725 11:45:58 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.725 11:45:58 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # sort 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # echo 00:10:07.725 11:45:58 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:07.725 11:45:58 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:07.725 11:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.725 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.725 11:45:58 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:07.725 11:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.725 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.725 11:45:58 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:07.725 11:45:58 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:07.725 11:45:58 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.725 11:45:58 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:07.725 11:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.725 11:45:58 -- target/referrals.sh@21 -- # sort 00:10:07.725 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:10:07.725 11:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.725 11:45:58 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:07.725 11:45:58 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:07.725 11:45:58 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:07.725 11:45:58 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.725 11:45:58 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.725 11:45:58 -- target/referrals.sh@26 -- # sort 00:10:07.983 11:45:58 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:07.983 11:45:58 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:07.983 11:45:58 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:10:07.983 11:45:58 -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:07.983 11:45:58 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:07.983 11:45:58 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.983 11:45:58 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:08.242 11:45:58 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:08.242 11:45:58 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:08.242 11:45:58 -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:08.242 11:45:58 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:08.242 11:45:58 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.242 11:45:58 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:08.242 11:45:58 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:08.242 11:45:58 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:08.242 11:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.242 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 11:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.242 11:45:58 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:08.242 11:45:58 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:08.242 11:45:58 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:08.242 11:45:58 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:08.242 11:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.242 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 11:45:58 -- target/referrals.sh@21 -- # sort 00:10:08.242 11:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.242 11:45:58 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:08.242 11:45:58 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:08.242 11:45:58 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:08.242 11:45:58 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:08.242 11:45:58 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:08.242 11:45:58 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.242 11:45:58 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:08.242 11:45:58 -- target/referrals.sh@26 -- # sort 00:10:08.502 11:45:58 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:08.502 11:45:58 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:08.502 11:45:58 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:08.502 11:45:58 -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:08.502 11:45:58 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:10:08.502 11:45:58 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.502 11:45:58 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:08.502 11:45:59 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:08.502 11:45:59 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:08.502 11:45:59 -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:08.502 11:45:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:08.502 11:45:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.502 11:45:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:08.761 11:45:59 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:08.761 11:45:59 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:08.761 11:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.761 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:10:08.761 11:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.761 11:45:59 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:08.761 11:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.761 11:45:59 -- target/referrals.sh@82 -- # jq length 00:10:08.761 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:10:08.761 11:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.761 11:45:59 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:08.761 11:45:59 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:08.761 11:45:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:08.761 11:45:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:08.761 11:45:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.761 11:45:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:08.761 11:45:59 -- target/referrals.sh@26 -- # sort 00:10:09.021 11:45:59 -- target/referrals.sh@26 -- # echo 00:10:09.021 11:45:59 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:09.021 11:45:59 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:09.021 11:45:59 -- target/referrals.sh@86 -- # nvmftestfini 00:10:09.021 11:45:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:09.021 11:45:59 -- nvmf/common.sh@117 -- # sync 00:10:09.021 11:45:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.021 11:45:59 -- nvmf/common.sh@120 -- # set +e 00:10:09.021 11:45:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.021 11:45:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.021 rmmod nvme_tcp 00:10:09.021 rmmod nvme_fabrics 00:10:09.021 rmmod nvme_keyring 00:10:09.021 11:45:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.021 11:45:59 -- nvmf/common.sh@124 -- # set -e 
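The referral checks just completed drive the target entirely over its RPC socket and verify the result from the host with nvme-cli. A standalone equivalent of one round trip, assuming the standard scripts/rpc.py is pointed at the same running target (the --hostnqn/--hostid flags used in this run are omitted for brevity), would look like:

  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length          # 1 referral registered
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430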
00:10:09.021 11:45:59 -- nvmf/common.sh@125 -- # return 0 00:10:09.021 11:45:59 -- nvmf/common.sh@478 -- # '[' -n 2355707 ']' 00:10:09.021 11:45:59 -- nvmf/common.sh@479 -- # killprocess 2355707 00:10:09.021 11:45:59 -- common/autotest_common.sh@936 -- # '[' -z 2355707 ']' 00:10:09.021 11:45:59 -- common/autotest_common.sh@940 -- # kill -0 2355707 00:10:09.021 11:45:59 -- common/autotest_common.sh@941 -- # uname 00:10:09.021 11:45:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.021 11:45:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2355707 00:10:09.021 11:45:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:09.021 11:45:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:09.021 11:45:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2355707' 00:10:09.021 killing process with pid 2355707 00:10:09.021 11:45:59 -- common/autotest_common.sh@955 -- # kill 2355707 00:10:09.021 11:45:59 -- common/autotest_common.sh@960 -- # wait 2355707 00:10:10.400 11:46:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:10.400 11:46:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:10.400 11:46:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:10.400 11:46:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.400 11:46:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.400 11:46:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.400 11:46:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.400 11:46:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.367 11:46:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:12.367 00:10:12.367 real 0m12.919s 00:10:12.367 user 0m15.833s 00:10:12.367 sys 0m5.995s 00:10:12.367 11:46:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:12.367 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 ************************************ 00:10:12.367 END TEST nvmf_referrals 00:10:12.367 ************************************ 00:10:12.626 11:46:02 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:12.626 11:46:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:12.626 11:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.626 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:10:12.626 ************************************ 00:10:12.626 START TEST nvmf_connect_disconnect 00:10:12.626 ************************************ 00:10:12.626 11:46:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:12.886 * Looking for test storage... 
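Each of these tests arms the same shell trap before doing any real work, so the target process, kernel modules, and namespace are torn down even if an assertion fails mid-run. A minimal form of that pattern, with a hypothetical cleanup helper standing in for nvmftestfini:

  cleanup() { killprocess "$nvmfpid"; }       # hypothetical stand-in for nvmftestfini
  trap 'cleanup' SIGINT SIGTERM EXIT
  # ... test body ...
  trap - SIGINT SIGTERM EXIT                   # disarm once the test has passed
  cleanup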
00:10:12.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.886 11:46:03 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.886 11:46:03 -- nvmf/common.sh@7 -- # uname -s 00:10:12.886 11:46:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.886 11:46:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.886 11:46:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.886 11:46:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.886 11:46:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.886 11:46:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.886 11:46:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.886 11:46:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.886 11:46:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.886 11:46:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.886 11:46:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:12.886 11:46:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:12.886 11:46:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.886 11:46:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.886 11:46:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.886 11:46:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.886 11:46:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.886 11:46:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.886 11:46:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.886 11:46:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.886 11:46:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.886 11:46:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.886 11:46:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.886 11:46:03 -- paths/export.sh@5 -- # export PATH 00:10:12.886 11:46:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.886 11:46:03 -- nvmf/common.sh@47 -- # : 0 00:10:12.886 11:46:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.886 11:46:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.886 11:46:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.886 11:46:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.886 11:46:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.886 11:46:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.886 11:46:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.886 11:46:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.886 11:46:03 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.886 11:46:03 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.886 11:46:03 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:12.886 11:46:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:12.886 11:46:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.886 11:46:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:12.886 11:46:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:12.886 11:46:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:12.886 11:46:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.886 11:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.886 11:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.886 11:46:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:12.886 11:46:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:12.886 11:46:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.886 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:10:19.455 11:46:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:19.455 11:46:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.455 11:46:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.455 11:46:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.455 11:46:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.455 11:46:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.455 11:46:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.455 11:46:09 -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.455 11:46:09 -- nvmf/common.sh@295 -- # local -ga net_devs 
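The device scan that follows resolves each detected PCI function to its kernel net device by globbing sysfs; for the first E810 port seen in this run, the equivalent lookup would be:

  ls /sys/bus/pci/devices/0000:af:00.0/net/    # prints cvl_0_0 on this host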
00:10:19.455 11:46:09 -- nvmf/common.sh@296 -- # e810=() 00:10:19.455 11:46:09 -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.455 11:46:09 -- nvmf/common.sh@297 -- # x722=() 00:10:19.455 11:46:09 -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.455 11:46:09 -- nvmf/common.sh@298 -- # mlx=() 00:10:19.455 11:46:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.455 11:46:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.455 11:46:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.455 11:46:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:19.455 11:46:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.455 11:46:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.455 11:46:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:19.455 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:19.455 11:46:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.455 11:46:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:19.455 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:19.455 11:46:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.455 11:46:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.455 11:46:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.455 11:46:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:19.455 11:46:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.455 11:46:09 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:10:19.455 Found net devices under 0000:af:00.0: cvl_0_0 00:10:19.455 11:46:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.455 11:46:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.455 11:46:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.455 11:46:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:19.455 11:46:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.455 11:46:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:19.455 Found net devices under 0000:af:00.1: cvl_0_1 00:10:19.455 11:46:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.455 11:46:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:19.455 11:46:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:19.455 11:46:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:19.455 11:46:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:19.455 11:46:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.455 11:46:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.455 11:46:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.455 11:46:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:19.455 11:46:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.455 11:46:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.455 11:46:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:19.455 11:46:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.455 11:46:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.455 11:46:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:19.455 11:46:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:19.455 11:46:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.455 11:46:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.455 11:46:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.455 11:46:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.455 11:46:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:19.455 11:46:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.714 11:46:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.714 11:46:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.714 11:46:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:19.714 00:10:19.714 --- 10.0.0.2 ping statistics --- 00:10:19.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.714 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:19.714 11:46:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:19.714 00:10:19.714 --- 10.0.0.1 ping statistics --- 00:10:19.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.714 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:19.714 11:46:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.714 11:46:10 -- nvmf/common.sh@411 -- # return 0 00:10:19.714 11:46:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:19.714 11:46:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.714 11:46:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:19.714 11:46:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:19.714 11:46:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.714 11:46:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:19.714 11:46:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:19.714 11:46:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:19.714 11:46:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:19.714 11:46:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:19.714 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:10:19.714 11:46:10 -- nvmf/common.sh@470 -- # nvmfpid=2360121 00:10:19.714 11:46:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.714 11:46:10 -- nvmf/common.sh@471 -- # waitforlisten 2360121 00:10:19.714 11:46:10 -- common/autotest_common.sh@817 -- # '[' -z 2360121 ']' 00:10:19.714 11:46:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.714 11:46:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:19.714 11:46:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.714 11:46:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:19.714 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:10:19.714 [2024-04-18 11:46:10.187600] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:19.714 [2024-04-18 11:46:10.187687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.714 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.972 [2024-04-18 11:46:10.318645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.230 [2024-04-18 11:46:10.537581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.230 [2024-04-18 11:46:10.537627] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.230 [2024-04-18 11:46:10.537639] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.230 [2024-04-18 11:46:10.537651] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.230 [2024-04-18 11:46:10.537663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
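With the target coming up inside the namespace, the connect/disconnect exercise that follows builds a malloc-backed subsystem over RPC and then loops nvme connect/disconnect with tracing switched off, so only the disconnect messages appear in the log. A hypothetical equivalent of the setup and a single loop iteration, using the standard scripts/rpc.py and nvme-cli:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                                     # creates Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420        # one of five iterations
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints "disconnected 1 controller(s)"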
00:10:20.230 [2024-04-18 11:46:10.537740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.230 [2024-04-18 11:46:10.537823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.230 [2024-04-18 11:46:10.537870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.230 [2024-04-18 11:46:10.537879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.489 11:46:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:20.489 11:46:10 -- common/autotest_common.sh@850 -- # return 0 00:10:20.489 11:46:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:20.489 11:46:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:20.489 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:10:20.489 11:46:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.489 11:46:11 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:20.489 11:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.489 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:10:20.489 [2024-04-18 11:46:11.009006] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.489 11:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.489 11:46:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:20.489 11:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.489 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:10:20.746 11:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.747 11:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.747 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:10:20.747 11:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.747 11:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.747 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:10:20.747 11:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.747 11:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.747 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:10:20.747 [2024-04-18 11:46:11.129458] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.747 11:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:20.747 11:46:11 -- target/connect_disconnect.sh@34 -- # set +x 00:10:24.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.019 11:46:28 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:39.019 11:46:28 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:39.019 11:46:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:39.019 11:46:28 -- nvmf/common.sh@117 -- # sync 00:10:39.019 11:46:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.019 11:46:28 -- nvmf/common.sh@120 -- # set +e 00:10:39.019 11:46:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.019 11:46:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.019 rmmod nvme_tcp 00:10:39.019 rmmod nvme_fabrics 00:10:39.019 rmmod nvme_keyring 00:10:39.019 11:46:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.019 11:46:28 -- nvmf/common.sh@124 -- # set -e 00:10:39.019 11:46:28 -- nvmf/common.sh@125 -- # return 0 00:10:39.019 11:46:28 -- nvmf/common.sh@478 -- # '[' -n 2360121 ']' 00:10:39.019 11:46:28 -- nvmf/common.sh@479 -- # killprocess 2360121 00:10:39.019 11:46:28 -- common/autotest_common.sh@936 -- # '[' -z 2360121 ']' 00:10:39.019 11:46:28 -- common/autotest_common.sh@940 -- # kill -0 2360121 00:10:39.019 11:46:28 -- common/autotest_common.sh@941 -- # uname 00:10:39.019 11:46:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.019 11:46:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2360121 00:10:39.019 11:46:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:39.019 11:46:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:39.019 11:46:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2360121' 00:10:39.019 killing process with pid 2360121 00:10:39.019 11:46:28 -- common/autotest_common.sh@955 -- # kill 2360121 00:10:39.019 11:46:28 -- common/autotest_common.sh@960 -- # wait 2360121 00:10:39.958 11:46:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:39.958 11:46:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:39.958 11:46:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:39.958 11:46:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.958 11:46:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.958 11:46:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.958 11:46:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.958 11:46:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.863 11:46:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:41.863 00:10:41.863 real 0m29.315s 00:10:41.863 user 1m18.368s 00:10:41.863 sys 0m7.168s 00:10:41.863 11:46:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.863 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:10:41.863 ************************************ 00:10:41.863 END TEST nvmf_connect_disconnect 00:10:41.863 ************************************ 00:10:42.122 11:46:32 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:42.122 11:46:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:42.122 11:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.122 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:10:42.122 ************************************ 00:10:42.122 START TEST nvmf_multitarget 00:10:42.122 ************************************ 00:10:42.122 11:46:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:10:42.382 * Looking for test storage... 00:10:42.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.382 11:46:32 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.382 11:46:32 -- nvmf/common.sh@7 -- # uname -s 00:10:42.382 11:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.382 11:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.382 11:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.382 11:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.382 11:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.382 11:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.382 11:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.382 11:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.382 11:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.382 11:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.382 11:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:42.382 11:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:42.382 11:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.382 11:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.382 11:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.382 11:46:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.382 11:46:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.382 11:46:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.382 11:46:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.382 11:46:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.382 11:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.382 11:46:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.382 11:46:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.382 11:46:32 -- paths/export.sh@5 -- # export PATH 00:10:42.382 11:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.382 11:46:32 -- nvmf/common.sh@47 -- # : 0 00:10:42.382 11:46:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.382 11:46:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.382 11:46:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.382 11:46:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.382 11:46:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.382 11:46:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.382 11:46:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.382 11:46:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.382 11:46:32 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:42.382 11:46:32 -- target/multitarget.sh@15 -- # nvmftestinit 00:10:42.382 11:46:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:42.382 11:46:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.382 11:46:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:42.382 11:46:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:42.382 11:46:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:42.382 11:46:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.382 11:46:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.382 11:46:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.382 11:46:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:42.382 11:46:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:42.382 11:46:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.382 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:10:48.949 11:46:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:48.949 11:46:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.949 11:46:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.949 11:46:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.949 11:46:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.949 11:46:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.949 11:46:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.949 11:46:39 -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.949 11:46:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.949 11:46:39 -- 
nvmf/common.sh@296 -- # e810=() 00:10:48.949 11:46:39 -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.949 11:46:39 -- nvmf/common.sh@297 -- # x722=() 00:10:48.949 11:46:39 -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.949 11:46:39 -- nvmf/common.sh@298 -- # mlx=() 00:10:48.949 11:46:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.949 11:46:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.949 11:46:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.949 11:46:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.949 11:46:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.949 11:46:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.949 11:46:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:48.949 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:48.949 11:46:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.949 11:46:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:48.949 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:48.949 11:46:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.949 11:46:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.949 11:46:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.949 11:46:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:48.949 11:46:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.949 11:46:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:10:48.949 Found net devices under 0000:af:00.0: cvl_0_0 00:10:48.949 11:46:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.949 11:46:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.949 11:46:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.949 11:46:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:48.949 11:46:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.949 11:46:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:48.949 Found net devices under 0000:af:00.1: cvl_0_1 00:10:48.949 11:46:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.949 11:46:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:48.949 11:46:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:48.949 11:46:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:48.949 11:46:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:48.949 11:46:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.949 11:46:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.949 11:46:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.949 11:46:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.949 11:46:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.949 11:46:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.949 11:46:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.949 11:46:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.949 11:46:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.949 11:46:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.949 11:46:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.949 11:46:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.949 11:46:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.949 11:46:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.949 11:46:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.949 11:46:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.949 11:46:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.208 11:46:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.208 11:46:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.208 11:46:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:10:49.208 00:10:49.208 --- 10.0.0.2 ping statistics --- 00:10:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.208 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:10:49.208 11:46:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:49.208 00:10:49.208 --- 10.0.0.1 ping statistics --- 00:10:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.208 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:49.208 11:46:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.208 11:46:39 -- nvmf/common.sh@411 -- # return 0 00:10:49.208 11:46:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:49.208 11:46:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.208 11:46:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:49.208 11:46:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:49.208 11:46:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.208 11:46:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:49.208 11:46:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:49.208 11:46:39 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:49.208 11:46:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:49.208 11:46:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:49.208 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:10:49.208 11:46:39 -- nvmf/common.sh@470 -- # nvmfpid=2367353 00:10:49.208 11:46:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.208 11:46:39 -- nvmf/common.sh@471 -- # waitforlisten 2367353 00:10:49.208 11:46:39 -- common/autotest_common.sh@817 -- # '[' -z 2367353 ']' 00:10:49.208 11:46:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.208 11:46:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.208 11:46:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.208 11:46:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.208 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:10:49.467 [2024-04-18 11:46:39.774264] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:49.467 [2024-04-18 11:46:39.774352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.467 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.467 [2024-04-18 11:46:39.905673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.725 [2024-04-18 11:46:40.136075] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.725 [2024-04-18 11:46:40.136119] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.725 [2024-04-18 11:46:40.136132] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.725 [2024-04-18 11:46:40.136147] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.725 [2024-04-18 11:46:40.136156] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:49.725 [2024-04-18 11:46:40.136237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.725 [2024-04-18 11:46:40.136314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.725 [2024-04-18 11:46:40.136373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.725 [2024-04-18 11:46:40.136381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.291 11:46:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.291 11:46:40 -- common/autotest_common.sh@850 -- # return 0 00:10:50.291 11:46:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:50.291 11:46:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:50.291 11:46:40 -- common/autotest_common.sh@10 -- # set +x 00:10:50.291 11:46:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.291 11:46:40 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:50.291 11:46:40 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:50.291 11:46:40 -- target/multitarget.sh@21 -- # jq length 00:10:50.291 11:46:40 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:50.291 11:46:40 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:50.291 "nvmf_tgt_1" 00:10:50.291 11:46:40 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:50.548 "nvmf_tgt_2" 00:10:50.548 11:46:40 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:50.548 11:46:40 -- target/multitarget.sh@28 -- # jq length 00:10:50.548 11:46:41 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:50.548 11:46:41 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:50.806 true 00:10:50.806 11:46:41 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:50.806 true 00:10:50.806 11:46:41 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:50.806 11:46:41 -- target/multitarget.sh@35 -- # jq length 00:10:50.806 11:46:41 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:50.806 11:46:41 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:50.806 11:46:41 -- target/multitarget.sh@41 -- # nvmftestfini 00:10:50.806 11:46:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:50.806 11:46:41 -- nvmf/common.sh@117 -- # sync 00:10:50.806 11:46:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.806 11:46:41 -- nvmf/common.sh@120 -- # set +e 00:10:50.807 11:46:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.807 11:46:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.807 rmmod nvme_tcp 00:10:51.065 rmmod nvme_fabrics 00:10:51.065 rmmod nvme_keyring 00:10:51.065 11:46:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.065 11:46:41 -- nvmf/common.sh@124 -- # set -e 00:10:51.065 11:46:41 -- nvmf/common.sh@125 -- # return 0 
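The multitarget check traced above reduces to a create/count/delete cycle against the running nvmf_tgt. A condensed sketch, with the helper script path written repo-relative (the trace uses the full Jenkins workspace path):

  # multitarget flow, condensed
  rpc=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two named targets
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default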
00:10:51.065 11:46:41 -- nvmf/common.sh@478 -- # '[' -n 2367353 ']' 00:10:51.065 11:46:41 -- nvmf/common.sh@479 -- # killprocess 2367353 00:10:51.065 11:46:41 -- common/autotest_common.sh@936 -- # '[' -z 2367353 ']' 00:10:51.065 11:46:41 -- common/autotest_common.sh@940 -- # kill -0 2367353 00:10:51.065 11:46:41 -- common/autotest_common.sh@941 -- # uname 00:10:51.065 11:46:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:51.065 11:46:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2367353 00:10:51.065 11:46:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:51.065 11:46:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:51.065 11:46:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2367353' 00:10:51.065 killing process with pid 2367353 00:10:51.065 11:46:41 -- common/autotest_common.sh@955 -- # kill 2367353 00:10:51.065 11:46:41 -- common/autotest_common.sh@960 -- # wait 2367353 00:10:52.441 11:46:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:52.441 11:46:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:52.441 11:46:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:52.441 11:46:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.441 11:46:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.441 11:46:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.441 11:46:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.441 11:46:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.344 11:46:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:54.344 00:10:54.344 real 0m12.198s 00:10:54.344 user 0m11.975s 00:10:54.344 sys 0m5.967s 00:10:54.344 11:46:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.344 11:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:54.344 ************************************ 00:10:54.344 END TEST nvmf_multitarget 00:10:54.344 ************************************ 00:10:54.344 11:46:44 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:54.344 11:46:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:54.344 11:46:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.344 11:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:54.603 ************************************ 00:10:54.603 START TEST nvmf_rpc 00:10:54.603 ************************************ 00:10:54.603 11:46:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:54.603 * Looking for test storage... 
00:10:54.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.603 11:46:45 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.603 11:46:45 -- nvmf/common.sh@7 -- # uname -s 00:10:54.603 11:46:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.603 11:46:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.603 11:46:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.603 11:46:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.603 11:46:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.603 11:46:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.603 11:46:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.603 11:46:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.603 11:46:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.603 11:46:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.603 11:46:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:54.603 11:46:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:54.603 11:46:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.603 11:46:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.603 11:46:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.603 11:46:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.603 11:46:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.862 11:46:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.862 11:46:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.862 11:46:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.862 11:46:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.862 11:46:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.862 11:46:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.862 11:46:45 -- paths/export.sh@5 -- # export PATH 00:10:54.862 11:46:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.862 11:46:45 -- nvmf/common.sh@47 -- # : 0 00:10:54.862 11:46:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.862 11:46:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.862 11:46:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.862 11:46:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.862 11:46:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.862 11:46:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.862 11:46:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.862 11:46:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.862 11:46:45 -- target/rpc.sh@11 -- # loops=5 00:10:54.862 11:46:45 -- target/rpc.sh@23 -- # nvmftestinit 00:10:54.862 11:46:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:54.862 11:46:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.862 11:46:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:54.862 11:46:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:54.862 11:46:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:54.862 11:46:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.862 11:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.862 11:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.862 11:46:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:54.862 11:46:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:54.862 11:46:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.862 11:46:45 -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 11:46:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:01.429 11:46:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.429 11:46:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.429 11:46:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.429 11:46:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.429 11:46:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.429 11:46:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.429 11:46:51 -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.429 11:46:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.429 11:46:51 -- nvmf/common.sh@296 -- # e810=() 00:11:01.429 11:46:51 -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.429 
11:46:51 -- nvmf/common.sh@297 -- # x722=() 00:11:01.429 11:46:51 -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.429 11:46:51 -- nvmf/common.sh@298 -- # mlx=() 00:11:01.429 11:46:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.429 11:46:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.429 11:46:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.429 11:46:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.429 11:46:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.429 11:46:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.429 11:46:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:01.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:01.429 11:46:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.429 11:46:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:01.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:01.429 11:46:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.429 11:46:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.429 11:46:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.430 11:46:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.430 11:46:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:01.430 11:46:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.430 11:46:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:01.430 Found net devices under 0000:af:00.0: cvl_0_0 00:11:01.430 11:46:51 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:01.430 11:46:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.430 11:46:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.430 11:46:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:01.430 11:46:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.430 11:46:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:01.430 Found net devices under 0000:af:00.1: cvl_0_1 00:11:01.430 11:46:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.430 11:46:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:01.430 11:46:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:01.430 11:46:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:01.430 11:46:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:01.430 11:46:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:01.430 11:46:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.430 11:46:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.430 11:46:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.430 11:46:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.430 11:46:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.430 11:46:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.430 11:46:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.430 11:46:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.430 11:46:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.430 11:46:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.430 11:46:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.430 11:46:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.430 11:46:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.430 11:46:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.430 11:46:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.748 11:46:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.748 11:46:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.748 11:46:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.748 11:46:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.748 11:46:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:11:01.748 00:11:01.748 --- 10.0.0.2 ping statistics --- 00:11:01.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.748 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:01.748 11:46:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:11:01.748 00:11:01.748 --- 10.0.0.1 ping statistics --- 00:11:01.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.748 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:11:01.748 11:46:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.748 11:46:52 -- nvmf/common.sh@411 -- # return 0 00:11:01.748 11:46:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:01.748 11:46:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.748 11:46:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:01.748 11:46:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:01.748 11:46:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.748 11:46:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:01.748 11:46:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:01.748 11:46:52 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:01.748 11:46:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:01.748 11:46:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:01.748 11:46:52 -- common/autotest_common.sh@10 -- # set +x 00:11:01.748 11:46:52 -- nvmf/common.sh@470 -- # nvmfpid=2371695 00:11:01.748 11:46:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.748 11:46:52 -- nvmf/common.sh@471 -- # waitforlisten 2371695 00:11:01.748 11:46:52 -- common/autotest_common.sh@817 -- # '[' -z 2371695 ']' 00:11:01.748 11:46:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.748 11:46:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:01.748 11:46:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.748 11:46:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:01.748 11:46:52 -- common/autotest_common.sh@10 -- # set +x 00:11:01.748 [2024-04-18 11:46:52.269884] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:01.748 [2024-04-18 11:46:52.269984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.006 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.007 [2024-04-18 11:46:52.398894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.264 [2024-04-18 11:46:52.622978] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.264 [2024-04-18 11:46:52.623026] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.265 [2024-04-18 11:46:52.623039] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.265 [2024-04-18 11:46:52.623052] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.265 [2024-04-18 11:46:52.623062] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
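As in the trace above, nvmfappstart launches the freshly built nvmf_tgt inside the target namespace and blocks until its RPC socket answers before any RPCs are sent. A rough sketch of that step (the backgrounding and pid capture are implied by the trace rather than shown verbatim; waitforlisten is the suite helper that waits for /var/tmp/spdk.sock):

  # nvmfappstart -m 0xF, sketched
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"     # wait for the UNIX domain socket /var/tmp/spdk.sock
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT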
00:11:02.265 [2024-04-18 11:46:52.626483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.265 [2024-04-18 11:46:52.626505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.265 [2024-04-18 11:46:52.626565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.265 [2024-04-18 11:46:52.626574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.523 11:46:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:02.523 11:46:53 -- common/autotest_common.sh@850 -- # return 0 00:11:02.523 11:46:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:02.523 11:46:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:02.523 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:02.780 11:46:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.780 11:46:53 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:02.780 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.780 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:02.780 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.780 11:46:53 -- target/rpc.sh@26 -- # stats='{ 00:11:02.780 "tick_rate": 2500000000, 00:11:02.780 "poll_groups": [ 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_0", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [] 00:11:02.780 }, 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_1", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [] 00:11:02.780 }, 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_2", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [] 00:11:02.780 }, 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_3", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [] 00:11:02.780 } 00:11:02.780 ] 00:11:02.780 }' 00:11:02.780 11:46:53 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:02.780 11:46:53 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:02.780 11:46:53 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:02.780 11:46:53 -- target/rpc.sh@15 -- # wc -l 00:11:02.780 11:46:53 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:02.780 11:46:53 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:02.780 11:46:53 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:02.780 11:46:53 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.780 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.780 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:02.780 [2024-04-18 11:46:53.223033] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.780 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.780 11:46:53 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:02.780 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.780 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:02.780 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.780 11:46:53 -- target/rpc.sh@33 -- # stats='{ 00:11:02.780 "tick_rate": 2500000000, 00:11:02.780 "poll_groups": [ 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_0", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [ 00:11:02.780 { 00:11:02.780 "trtype": "TCP" 00:11:02.780 } 00:11:02.780 ] 00:11:02.780 }, 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_1", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [ 00:11:02.780 { 00:11:02.780 "trtype": "TCP" 00:11:02.780 } 00:11:02.780 ] 00:11:02.780 }, 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_2", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [ 00:11:02.780 { 00:11:02.780 "trtype": "TCP" 00:11:02.780 } 00:11:02.780 ] 00:11:02.780 }, 00:11:02.780 { 00:11:02.780 "name": "nvmf_tgt_poll_group_3", 00:11:02.780 "admin_qpairs": 0, 00:11:02.780 "io_qpairs": 0, 00:11:02.780 "current_admin_qpairs": 0, 00:11:02.780 "current_io_qpairs": 0, 00:11:02.780 "pending_bdev_io": 0, 00:11:02.780 "completed_nvme_io": 0, 00:11:02.780 "transports": [ 00:11:02.780 { 00:11:02.780 "trtype": "TCP" 00:11:02.780 } 00:11:02.780 ] 00:11:02.780 } 00:11:02.780 ] 00:11:02.780 }' 00:11:02.780 11:46:53 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:02.780 11:46:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:02.780 11:46:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:02.780 11:46:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:02.780 11:46:53 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:02.780 11:46:53 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:02.780 11:46:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:02.780 11:46:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:02.780 11:46:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:03.037 11:46:53 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:03.037 11:46:53 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:03.037 11:46:53 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:03.037 11:46:53 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:03.037 11:46:53 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:03.037 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.037 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 Malloc1 00:11:03.037 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.037 11:46:53 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.037 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.037 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 
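Before any subsystem is provisioned, the rpc.sh flow above sanity-checks the idle poll groups, brings up the TCP transport, and creates the malloc bdev it will export. A condensed sketch using the suite's rpc_cmd wrapper and the same jq filters as the trace:

  # poll-group check, transport init and backing bdev, condensed
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l     # 4 poll groups, one per core in -m 0xF
  rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'   # null: no transport yet
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192               # "*** TCP Transport Init ***"
  rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'   # now reports "trtype": "TCP"
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1                  # 64 MB malloc bdev, 512-byte blocks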
11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.037 11:46:53 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.037 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.037 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.037 11:46:53 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:03.037 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.037 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.037 11:46:53 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.037 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.037 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 [2024-04-18 11:46:53.474648] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.037 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.037 11:46:53 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:11:03.037 11:46:53 -- common/autotest_common.sh@638 -- # local es=0 00:11:03.037 11:46:53 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:11:03.038 11:46:53 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:03.038 11:46:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.038 11:46:53 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:03.038 11:46:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.038 11:46:53 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:03.038 11:46:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.038 11:46:53 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:03.038 11:46:53 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:03.038 11:46:53 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:11:03.038 [2024-04-18 11:46:53.510196] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:11:03.038 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:03.038 could not add new controller: failed to write to nvme-fabrics device 00:11:03.038 11:46:53 -- common/autotest_common.sh@641 -- # es=1 00:11:03.038 11:46:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:03.038 11:46:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:03.038 11:46:53 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:11:03.038 11:46:53 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:03.038 11:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.038 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:11:03.038 11:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.038 11:46:53 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.411 11:46:54 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.411 11:46:54 -- common/autotest_common.sh@1184 -- # local i=0 00:11:04.411 11:46:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.411 11:46:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:04.411 11:46:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:06.311 11:46:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:06.311 11:46:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:06.311 11:46:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.569 11:46:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:06.569 11:46:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.569 11:46:56 -- common/autotest_common.sh@1194 -- # return 0 00:11:06.569 11:46:56 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.827 11:46:57 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.827 11:46:57 -- common/autotest_common.sh@1205 -- # local i=0 00:11:06.827 11:46:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:06.827 11:46:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.827 11:46:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:06.827 11:46:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.827 11:46:57 -- common/autotest_common.sh@1217 -- # return 0 00:11:06.827 11:46:57 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:06.827 11:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.827 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:11:06.827 11:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.827 11:46:57 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.827 11:46:57 -- common/autotest_common.sh@638 -- # local es=0 00:11:06.827 11:46:57 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.827 11:46:57 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:06.827 11:46:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.827 11:46:57 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:06.827 11:46:57 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.827 11:46:57 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:06.827 11:46:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.827 11:46:57 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:06.827 11:46:57 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:06.827 11:46:57 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.827 [2024-04-18 11:46:57.250735] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:11:06.827 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:06.827 could not add new controller: failed to write to nvme-fabrics device 00:11:06.827 11:46:57 -- common/autotest_common.sh@641 -- # es=1 00:11:06.827 11:46:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:06.827 11:46:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:06.827 11:46:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:06.827 11:46:57 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:06.827 11:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.827 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:11:06.827 11:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.827 11:46:57 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.201 11:46:58 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.201 11:46:58 -- common/autotest_common.sh@1184 -- # local i=0 00:11:08.201 11:46:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.201 11:46:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:08.201 11:46:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:10.099 11:47:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:10.099 11:47:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:10.100 11:47:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.100 11:47:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:10.100 11:47:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.100 11:47:00 -- common/autotest_common.sh@1194 -- # return 0 00:11:10.100 11:47:00 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.358 11:47:00 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.358 11:47:00 -- common/autotest_common.sh@1205 -- # local i=0 00:11:10.616 11:47:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:10.616 11:47:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.616 11:47:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:10.616 11:47:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.616 11:47:00 -- common/autotest_common.sh@1217 -- # return 0 00:11:10.616 11:47:00 -- 
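The connect attempts traced above exercise the subsystem's host allow-list: with allow_any_host disabled a connect is rejected until the host NQN is added, removing the entry rejects it again, and re-enabling allow_any_host admits any initiator. A condensed sketch ($NVME_HOSTNQN / $NVME_HOSTID come from nvmf/common.sh, sourced earlier in this run):

  # host access-control checks, condensed
  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  ! nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420            # rejected: host not on the allow-list
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420            # accepted
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  ! nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420            # rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420            # any host accepted now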
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.616 11:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.616 11:47:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.616 11:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.616 11:47:00 -- target/rpc.sh@81 -- # seq 1 5 00:11:10.616 11:47:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:10.616 11:47:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:10.616 11:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.616 11:47:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.616 11:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.616 11:47:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.616 11:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.616 11:47:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.616 [2024-04-18 11:47:00.974171] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.616 11:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.616 11:47:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:10.616 11:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.616 11:47:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.616 11:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.616 11:47:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:10.616 11:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.616 11:47:00 -- common/autotest_common.sh@10 -- # set +x 00:11:10.616 11:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.616 11:47:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.989 11:47:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.989 11:47:02 -- common/autotest_common.sh@1184 -- # local i=0 00:11:11.989 11:47:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.989 11:47:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:11.989 11:47:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:13.889 11:47:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:13.889 11:47:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:13.889 11:47:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.889 11:47:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:13.889 11:47:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.889 11:47:04 -- common/autotest_common.sh@1194 -- # return 0 00:11:13.889 11:47:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.458 11:47:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.458 11:47:04 -- common/autotest_common.sh@1205 -- # local i=0 00:11:14.458 11:47:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:14.458 11:47:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:14.458 11:47:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:14.458 11:47:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.458 11:47:04 -- common/autotest_common.sh@1217 -- # return 0 00:11:14.458 11:47:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.458 11:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.458 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 11:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.458 11:47:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.458 11:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.458 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 11:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.458 11:47:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:14.458 11:47:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.458 11:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.458 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 11:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.458 11:47:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.458 11:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.458 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 [2024-04-18 11:47:04.831292] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.458 11:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.458 11:47:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:14.458 11:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.458 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 11:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.458 11:47:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.458 11:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.458 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 11:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.458 11:47:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.837 11:47:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.837 11:47:06 -- common/autotest_common.sh@1184 -- # local i=0 00:11:15.837 11:47:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.837 11:47:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:15.837 11:47:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:17.740 11:47:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:17.740 11:47:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:17.740 11:47:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.740 11:47:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:17.740 11:47:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.740 11:47:08 -- 
common/autotest_common.sh@1194 -- # return 0 00:11:17.740 11:47:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.998 11:47:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.998 11:47:08 -- common/autotest_common.sh@1205 -- # local i=0 00:11:17.998 11:47:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:17.998 11:47:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.998 11:47:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:17.998 11:47:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.256 11:47:08 -- common/autotest_common.sh@1217 -- # return 0 00:11:18.256 11:47:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.256 11:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.256 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.256 11:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.256 11:47:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.256 11:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.256 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.256 11:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.256 11:47:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:18.256 11:47:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.256 11:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.256 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.256 11:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.256 11:47:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.256 11:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.256 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.256 [2024-04-18 11:47:08.588575] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.256 11:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.256 11:47:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:18.256 11:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.256 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.256 11:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.256 11:47:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.256 11:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.256 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.256 11:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.256 11:47:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.663 11:47:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.663 11:47:09 -- common/autotest_common.sh@1184 -- # local i=0 00:11:19.663 11:47:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.663 11:47:09 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:11:19.663 11:47:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:21.575 11:47:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:21.575 11:47:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:21.575 11:47:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.575 11:47:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:21.576 11:47:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.576 11:47:11 -- common/autotest_common.sh@1194 -- # return 0 00:11:21.576 11:47:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.835 11:47:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.835 11:47:12 -- common/autotest_common.sh@1205 -- # local i=0 00:11:21.835 11:47:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:21.835 11:47:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.835 11:47:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:21.835 11:47:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.835 11:47:12 -- common/autotest_common.sh@1217 -- # return 0 00:11:21.835 11:47:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.835 11:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.835 11:47:12 -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 11:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.835 11:47:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.835 11:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.835 11:47:12 -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 11:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.835 11:47:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:21.835 11:47:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:21.835 11:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.835 11:47:12 -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 11:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.835 11:47:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.835 11:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.835 11:47:12 -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 [2024-04-18 11:47:12.306723] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.835 11:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.835 11:47:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:21.835 11:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.835 11:47:12 -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 11:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.835 11:47:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.835 11:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.835 11:47:12 -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 11:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.835 
11:47:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.212 11:47:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.212 11:47:13 -- common/autotest_common.sh@1184 -- # local i=0 00:11:23.212 11:47:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.212 11:47:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:23.212 11:47:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:25.746 11:47:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:25.746 11:47:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:25.746 11:47:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.746 11:47:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:25.746 11:47:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.746 11:47:15 -- common/autotest_common.sh@1194 -- # return 0 00:11:25.746 11:47:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.746 11:47:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.746 11:47:15 -- common/autotest_common.sh@1205 -- # local i=0 00:11:25.746 11:47:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:25.746 11:47:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.746 11:47:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:25.746 11:47:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.746 11:47:16 -- common/autotest_common.sh@1217 -- # return 0 00:11:25.746 11:47:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.746 11:47:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.746 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 11:47:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.746 11:47:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.746 11:47:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.746 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 11:47:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.746 11:47:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:25.746 11:47:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:25.746 11:47:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.746 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 11:47:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.746 11:47:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.746 11:47:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.746 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 [2024-04-18 11:47:16.065218] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.746 11:47:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.746 11:47:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:25.746 
11:47:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.746 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 11:47:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.746 11:47:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:25.746 11:47:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.746 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 11:47:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.746 11:47:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.123 11:47:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.123 11:47:17 -- common/autotest_common.sh@1184 -- # local i=0 00:11:27.123 11:47:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.123 11:47:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:27.123 11:47:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:29.027 11:47:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:29.027 11:47:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:29.027 11:47:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.027 11:47:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:29.027 11:47:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.027 11:47:19 -- common/autotest_common.sh@1194 -- # return 0 00:11:29.027 11:47:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.286 11:47:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.286 11:47:19 -- common/autotest_common.sh@1205 -- # local i=0 00:11:29.286 11:47:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:29.286 11:47:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.286 11:47:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:29.286 11:47:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.286 11:47:19 -- common/autotest_common.sh@1217 -- # return 0 00:11:29.286 11:47:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 -- target/rpc.sh@99 -- # seq 1 5 00:11:29.286 11:47:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.286 11:47:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 [2024-04-18 11:47:19.798127] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.286 11:47:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.286 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.286 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.545 11:47:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 [2024-04-18 11:47:19.846211] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.545 11:47:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 [2024-04-18 11:47:19.894372] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.545 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.545 11:47:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.545 11:47:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.545 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 [2024-04-18 11:47:19.946577] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 
11:47:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.546 11:47:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 [2024-04-18 11:47:19.994732] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.546 11:47:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.546 11:47:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:19 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.546 11:47:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.546 11:47:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.546 11:47:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:20 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
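Each rpc.sh@99-107 pass above repeats the same create-and-tear-down cycle through scripts/rpc.py; a condensed sketch of one such loop, assuming a running target that already exposes a Malloc1 bdev (the rpc.py path and loop count are placeholders):

  rpc=./scripts/rpc.py                     # placeholder path inside an SPDK checkout
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1      # first namespace gets NSID 1
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_delete_subsystem "$nqn"
  done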
00:11:29.546 11:47:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.546 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 11:47:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.546 11:47:20 -- target/rpc.sh@110 -- # stats='{ 00:11:29.546 "tick_rate": 2500000000, 00:11:29.546 "poll_groups": [ 00:11:29.546 { 00:11:29.546 "name": "nvmf_tgt_poll_group_0", 00:11:29.546 "admin_qpairs": 2, 00:11:29.546 "io_qpairs": 196, 00:11:29.546 "current_admin_qpairs": 0, 00:11:29.546 "current_io_qpairs": 0, 00:11:29.546 "pending_bdev_io": 0, 00:11:29.546 "completed_nvme_io": 296, 00:11:29.546 "transports": [ 00:11:29.546 { 00:11:29.546 "trtype": "TCP" 00:11:29.546 } 00:11:29.546 ] 00:11:29.546 }, 00:11:29.546 { 00:11:29.546 "name": "nvmf_tgt_poll_group_1", 00:11:29.546 "admin_qpairs": 2, 00:11:29.546 "io_qpairs": 196, 00:11:29.546 "current_admin_qpairs": 0, 00:11:29.546 "current_io_qpairs": 0, 00:11:29.546 "pending_bdev_io": 0, 00:11:29.546 "completed_nvme_io": 199, 00:11:29.546 "transports": [ 00:11:29.546 { 00:11:29.546 "trtype": "TCP" 00:11:29.546 } 00:11:29.546 ] 00:11:29.546 }, 00:11:29.546 { 00:11:29.546 "name": "nvmf_tgt_poll_group_2", 00:11:29.546 "admin_qpairs": 1, 00:11:29.546 "io_qpairs": 196, 00:11:29.546 "current_admin_qpairs": 0, 00:11:29.546 "current_io_qpairs": 0, 00:11:29.546 "pending_bdev_io": 0, 00:11:29.546 "completed_nvme_io": 391, 00:11:29.546 "transports": [ 00:11:29.546 { 00:11:29.546 "trtype": "TCP" 00:11:29.546 } 00:11:29.546 ] 00:11:29.546 }, 00:11:29.546 { 00:11:29.546 "name": "nvmf_tgt_poll_group_3", 00:11:29.546 "admin_qpairs": 2, 00:11:29.546 "io_qpairs": 196, 00:11:29.546 "current_admin_qpairs": 0, 00:11:29.546 "current_io_qpairs": 0, 00:11:29.546 "pending_bdev_io": 0, 00:11:29.546 "completed_nvme_io": 248, 00:11:29.546 "transports": [ 00:11:29.546 { 00:11:29.546 "trtype": "TCP" 00:11:29.546 } 00:11:29.546 ] 00:11:29.546 } 00:11:29.546 ] 00:11:29.546 }' 00:11:29.546 11:47:20 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:29.546 11:47:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:29.546 11:47:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:29.546 11:47:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.805 11:47:20 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:29.805 11:47:20 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:29.805 11:47:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:29.805 11:47:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:29.805 11:47:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.805 11:47:20 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:11:29.805 11:47:20 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:29.805 11:47:20 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:29.805 11:47:20 -- target/rpc.sh@123 -- # nvmftestfini 00:11:29.805 11:47:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:29.805 11:47:20 -- nvmf/common.sh@117 -- # sync 00:11:29.805 11:47:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.805 11:47:20 -- nvmf/common.sh@120 -- # set +e 00:11:29.805 11:47:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.805 11:47:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.805 rmmod nvme_tcp 00:11:29.805 rmmod nvme_fabrics 00:11:29.805 rmmod nvme_keyring 00:11:29.805 11:47:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.805 11:47:20 -- nvmf/common.sh@124 -- # set -e 00:11:29.805 11:47:20 -- 
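The jsum helper used just above to reduce the nvmf_get_stats output is only a jq projection summed by awk; a standalone equivalent is sketched here (re-querying the target on every call is a simplification of the script's cached stats variable, and the rpc.py path is assumed):

  # Equivalent of target/rpc.sh's jsum: apply a jq filter to the stats and sum the results.
  jsum() {
      local filter=$1
      ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')
  io_qpairs=$(jsum '.poll_groups[].io_qpairs')
  (( admin_qpairs > 0 && io_qpairs > 0 )) || echo "no queue pairs were ever created" >&2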
nvmf/common.sh@125 -- # return 0 00:11:29.805 11:47:20 -- nvmf/common.sh@478 -- # '[' -n 2371695 ']' 00:11:29.805 11:47:20 -- nvmf/common.sh@479 -- # killprocess 2371695 00:11:29.805 11:47:20 -- common/autotest_common.sh@936 -- # '[' -z 2371695 ']' 00:11:29.805 11:47:20 -- common/autotest_common.sh@940 -- # kill -0 2371695 00:11:29.805 11:47:20 -- common/autotest_common.sh@941 -- # uname 00:11:29.805 11:47:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:29.805 11:47:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2371695 00:11:29.805 11:47:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:29.805 11:47:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:29.805 11:47:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2371695' 00:11:29.805 killing process with pid 2371695 00:11:29.805 11:47:20 -- common/autotest_common.sh@955 -- # kill 2371695 00:11:29.805 11:47:20 -- common/autotest_common.sh@960 -- # wait 2371695 00:11:31.709 11:47:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:31.709 11:47:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:31.709 11:47:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:31.709 11:47:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.709 11:47:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.709 11:47:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.709 11:47:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.709 11:47:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.612 11:47:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:33.612 00:11:33.612 real 0m38.809s 00:11:33.612 user 1m55.237s 00:11:33.612 sys 0m8.209s 00:11:33.612 11:47:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:33.612 11:47:23 -- common/autotest_common.sh@10 -- # set +x 00:11:33.612 ************************************ 00:11:33.612 END TEST nvmf_rpc 00:11:33.612 ************************************ 00:11:33.612 11:47:23 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:33.612 11:47:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:33.612 11:47:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.612 11:47:23 -- common/autotest_common.sh@10 -- # set +x 00:11:33.612 ************************************ 00:11:33.612 START TEST nvmf_invalid 00:11:33.612 ************************************ 00:11:33.612 11:47:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:33.612 * Looking for test storage... 
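The killprocess/nvmftestfini sequence just above stops the target by pid and unloads the kernel initiator modules; a stripped-down sketch of that teardown, assuming the pid was captured when nvmf_tgt was launched:

  # Stripped-down version of the teardown shown above (killprocess plus module unload).
  teardown_target() {
      local pid=$1
      if kill -0 "$pid" 2>/dev/null; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" 2>/dev/null || true   # works because nvmf_tgt is a child of the harness shell
      fi
      # Remove the NVMe/TCP initiator modules; dependent modules unload with them.
      modprobe -v -r nvme-tcp || true
      modprobe -v -r nvme-fabrics || true
  }
  # teardown_target "$nvmfpid"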
00:11:33.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.871 11:47:24 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.871 11:47:24 -- nvmf/common.sh@7 -- # uname -s 00:11:33.871 11:47:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.871 11:47:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.871 11:47:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.871 11:47:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.871 11:47:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.871 11:47:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.871 11:47:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.871 11:47:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.871 11:47:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.871 11:47:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.871 11:47:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:33.871 11:47:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:33.871 11:47:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.871 11:47:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.871 11:47:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.871 11:47:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.871 11:47:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.871 11:47:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.871 11:47:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.871 11:47:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.871 11:47:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.871 11:47:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.871 11:47:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.871 11:47:24 -- paths/export.sh@5 -- # export PATH 00:11:33.871 11:47:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.871 11:47:24 -- nvmf/common.sh@47 -- # : 0 00:11:33.871 11:47:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.871 11:47:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.871 11:47:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.871 11:47:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.871 11:47:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.871 11:47:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.871 11:47:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.871 11:47:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.871 11:47:24 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:33.871 11:47:24 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:33.871 11:47:24 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:33.871 11:47:24 -- target/invalid.sh@14 -- # target=foobar 00:11:33.871 11:47:24 -- target/invalid.sh@16 -- # RANDOM=0 00:11:33.871 11:47:24 -- target/invalid.sh@34 -- # nvmftestinit 00:11:33.871 11:47:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:33.871 11:47:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.871 11:47:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:33.871 11:47:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:33.871 11:47:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:33.871 11:47:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.871 11:47:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.871 11:47:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.871 11:47:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:33.871 11:47:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:33.872 11:47:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.872 11:47:24 -- common/autotest_common.sh@10 -- # set +x 00:11:40.431 11:47:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:40.431 11:47:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.431 11:47:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.431 11:47:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.431 11:47:30 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.431 11:47:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.431 11:47:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.431 11:47:30 -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.431 11:47:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.431 11:47:30 -- nvmf/common.sh@296 -- # e810=() 00:11:40.431 11:47:30 -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.431 11:47:30 -- nvmf/common.sh@297 -- # x722=() 00:11:40.431 11:47:30 -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.431 11:47:30 -- nvmf/common.sh@298 -- # mlx=() 00:11:40.431 11:47:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.431 11:47:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.431 11:47:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.431 11:47:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.431 11:47:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.431 11:47:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.431 11:47:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:40.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:40.431 11:47:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.431 11:47:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:40.431 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:40.431 11:47:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.431 11:47:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.431 
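The scan above collects the supported NIC PCI IDs (0x8086:0x159b for these E810 ports) and then resolves each function to its netdev through sysfs, as the "Found net devices under ..." lines that follow show; a trimmed sketch of the same lookup, using lspci in place of the script's cached bus scan (that substitution and the ID filter are my assumptions, the sysfs path is the one used above):

  # Find Intel E810 functions (8086:159b) and print the netdev behind each one.
  for pci in $(lspci -Dn | awk '$3 == "8086:159b" {print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] && echo "Found net device under $pci: ${netdir##*/}"
      done
  done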
11:47:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.431 11:47:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:40.431 11:47:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.431 11:47:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:40.431 Found net devices under 0000:af:00.0: cvl_0_0 00:11:40.431 11:47:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.431 11:47:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.431 11:47:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.431 11:47:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:40.431 11:47:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.431 11:47:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:40.431 Found net devices under 0000:af:00.1: cvl_0_1 00:11:40.431 11:47:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.431 11:47:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:40.431 11:47:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:40.431 11:47:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:40.431 11:47:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.431 11:47:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.431 11:47:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.431 11:47:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.431 11:47:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.431 11:47:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.431 11:47:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.431 11:47:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.431 11:47:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.431 11:47:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.431 11:47:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.431 11:47:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.431 11:47:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.431 11:47:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.431 11:47:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.431 11:47:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.431 11:47:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.431 11:47:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.431 11:47:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.431 11:47:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:11:40.431 00:11:40.431 --- 10.0.0.2 ping statistics --- 00:11:40.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.431 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:40.431 11:47:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:40.431 00:11:40.431 --- 10.0.0.1 ping statistics --- 00:11:40.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.431 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:40.431 11:47:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.431 11:47:30 -- nvmf/common.sh@411 -- # return 0 00:11:40.431 11:47:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:40.431 11:47:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.431 11:47:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:40.431 11:47:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.431 11:47:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:40.431 11:47:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:40.431 11:47:30 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:40.431 11:47:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:40.431 11:47:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:40.431 11:47:30 -- common/autotest_common.sh@10 -- # set +x 00:11:40.431 11:47:30 -- nvmf/common.sh@470 -- # nvmfpid=2380513 00:11:40.431 11:47:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.431 11:47:30 -- nvmf/common.sh@471 -- # waitforlisten 2380513 00:11:40.431 11:47:30 -- common/autotest_common.sh@817 -- # '[' -z 2380513 ']' 00:11:40.431 11:47:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.431 11:47:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:40.431 11:47:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.431 11:47:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:40.431 11:47:30 -- common/autotest_common.sh@10 -- # set +x 00:11:40.431 [2024-04-18 11:47:30.869925] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:40.431 [2024-04-18 11:47:30.870011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.431 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.689 [2024-04-18 11:47:31.000925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.689 [2024-04-18 11:47:31.211444] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.689 [2024-04-18 11:47:31.211494] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.689 [2024-04-18 11:47:31.211506] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.689 [2024-04-18 11:47:31.211518] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.689 [2024-04-18 11:47:31.211527] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
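With one port moved into the cvl_0_0_ns_spdk namespace, addressed, and ping-checked above, nvmf_tgt is then started inside that namespace and the harness waits for its RPC socket; a compressed sketch of that launch-and-wait step (polling rpc_get_methods is my stand-in for the harness's waitforlisten, and the binary/script paths are placeholders):

  # Start the target inside the namespace prepared above and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 60); do
      ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done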
00:11:40.689 [2024-04-18 11:47:31.211662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.689 [2024-04-18 11:47:31.211737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.689 [2024-04-18 11:47:31.211799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.689 [2024-04-18 11:47:31.211809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.295 11:47:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:41.295 11:47:31 -- common/autotest_common.sh@850 -- # return 0 00:11:41.295 11:47:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:41.295 11:47:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:41.295 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:11:41.295 11:47:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.295 11:47:31 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:41.295 11:47:31 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20538 00:11:41.553 [2024-04-18 11:47:31.842460] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:41.553 11:47:31 -- target/invalid.sh@40 -- # out='request: 00:11:41.553 { 00:11:41.553 "nqn": "nqn.2016-06.io.spdk:cnode20538", 00:11:41.553 "tgt_name": "foobar", 00:11:41.553 "method": "nvmf_create_subsystem", 00:11:41.553 "req_id": 1 00:11:41.553 } 00:11:41.553 Got JSON-RPC error response 00:11:41.553 response: 00:11:41.553 { 00:11:41.553 "code": -32603, 00:11:41.553 "message": "Unable to find target foobar" 00:11:41.553 }' 00:11:41.553 11:47:31 -- target/invalid.sh@41 -- # [[ request: 00:11:41.553 { 00:11:41.554 "nqn": "nqn.2016-06.io.spdk:cnode20538", 00:11:41.554 "tgt_name": "foobar", 00:11:41.554 "method": "nvmf_create_subsystem", 00:11:41.554 "req_id": 1 00:11:41.554 } 00:11:41.554 Got JSON-RPC error response 00:11:41.554 response: 00:11:41.554 { 00:11:41.554 "code": -32603, 00:11:41.554 "message": "Unable to find target foobar" 00:11:41.554 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:41.554 11:47:31 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:41.554 11:47:31 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25562 00:11:41.554 [2024-04-18 11:47:32.035181] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25562: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:41.554 11:47:32 -- target/invalid.sh@45 -- # out='request: 00:11:41.554 { 00:11:41.554 "nqn": "nqn.2016-06.io.spdk:cnode25562", 00:11:41.554 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:41.554 "method": "nvmf_create_subsystem", 00:11:41.554 "req_id": 1 00:11:41.554 } 00:11:41.554 Got JSON-RPC error response 00:11:41.554 response: 00:11:41.554 { 00:11:41.554 "code": -32602, 00:11:41.554 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:41.554 }' 00:11:41.554 11:47:32 -- target/invalid.sh@46 -- # [[ request: 00:11:41.554 { 00:11:41.554 "nqn": "nqn.2016-06.io.spdk:cnode25562", 00:11:41.554 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:41.554 "method": "nvmf_create_subsystem", 00:11:41.554 "req_id": 1 00:11:41.554 } 00:11:41.554 Got JSON-RPC error response 00:11:41.554 response: 00:11:41.554 { 
00:11:41.554 "code": -32602, 00:11:41.554 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:41.554 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:41.554 11:47:32 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:41.554 11:47:32 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7046 00:11:41.812 [2024-04-18 11:47:32.227810] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7046: invalid model number 'SPDK_Controller' 00:11:41.812 11:47:32 -- target/invalid.sh@50 -- # out='request: 00:11:41.812 { 00:11:41.812 "nqn": "nqn.2016-06.io.spdk:cnode7046", 00:11:41.812 "model_number": "SPDK_Controller\u001f", 00:11:41.812 "method": "nvmf_create_subsystem", 00:11:41.812 "req_id": 1 00:11:41.812 } 00:11:41.812 Got JSON-RPC error response 00:11:41.812 response: 00:11:41.812 { 00:11:41.812 "code": -32602, 00:11:41.812 "message": "Invalid MN SPDK_Controller\u001f" 00:11:41.812 }' 00:11:41.812 11:47:32 -- target/invalid.sh@51 -- # [[ request: 00:11:41.812 { 00:11:41.812 "nqn": "nqn.2016-06.io.spdk:cnode7046", 00:11:41.812 "model_number": "SPDK_Controller\u001f", 00:11:41.812 "method": "nvmf_create_subsystem", 00:11:41.812 "req_id": 1 00:11:41.812 } 00:11:41.812 Got JSON-RPC error response 00:11:41.812 response: 00:11:41.812 { 00:11:41.812 "code": -32602, 00:11:41.812 "message": "Invalid MN SPDK_Controller\u001f" 00:11:41.812 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:41.812 11:47:32 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:41.812 11:47:32 -- target/invalid.sh@19 -- # local length=21 ll 00:11:41.812 11:47:32 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:41.812 11:47:32 -- target/invalid.sh@21 -- # local chars 00:11:41.812 11:47:32 -- target/invalid.sh@22 -- # local string 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 119 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=w 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 48 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=0 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 121 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=y 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 121 00:11:41.812 11:47:32 -- 
target/invalid.sh@25 -- # echo -e '\x79' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=y 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 105 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=i 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 79 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=O 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 118 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=v 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 33 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+='!' 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 56 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=8 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 103 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=g 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 54 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=6 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 103 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # string+=g 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.812 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.812 11:47:32 -- target/invalid.sh@25 -- # printf %x 57 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=9 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 54 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=6 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 77 00:11:42.070 11:47:32 -- 
target/invalid.sh@25 -- # echo -e '\x4d' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=M 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 45 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=- 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 106 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=j 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 52 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=4 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 124 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+='|' 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 91 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+='[' 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # printf %x 78 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:42.070 11:47:32 -- target/invalid.sh@25 -- # string+=N 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.070 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.070 11:47:32 -- target/invalid.sh@28 -- # [[ w == \- ]] 00:11:42.070 11:47:32 -- target/invalid.sh@31 -- # echo 'w0yyiOv!8g6g96M-j4|[N' 00:11:42.070 11:47:32 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'w0yyiOv!8g6g96M-j4|[N' nqn.2016-06.io.spdk:cnode3809 00:11:42.070 [2024-04-18 11:47:32.584987] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3809: invalid serial number 'w0yyiOv!8g6g96M-j4|[N' 00:11:42.070 11:47:32 -- target/invalid.sh@54 -- # out='request: 00:11:42.070 { 00:11:42.070 "nqn": "nqn.2016-06.io.spdk:cnode3809", 00:11:42.070 "serial_number": "w0yyiOv!8g6g96M-j4|[N", 00:11:42.070 "method": "nvmf_create_subsystem", 00:11:42.070 "req_id": 1 00:11:42.070 } 00:11:42.070 Got JSON-RPC error response 00:11:42.070 response: 00:11:42.070 { 00:11:42.070 "code": -32602, 00:11:42.070 "message": "Invalid SN w0yyiOv!8g6g96M-j4|[N" 00:11:42.070 }' 00:11:42.070 11:47:32 -- target/invalid.sh@55 -- # [[ request: 00:11:42.070 { 00:11:42.070 "nqn": "nqn.2016-06.io.spdk:cnode3809", 00:11:42.070 "serial_number": "w0yyiOv!8g6g96M-j4|[N", 00:11:42.070 "method": "nvmf_create_subsystem", 00:11:42.070 "req_id": 1 00:11:42.070 } 00:11:42.070 Got JSON-RPC error response 00:11:42.070 response: 00:11:42.070 { 00:11:42.070 "code": -32602, 00:11:42.070 
"message": "Invalid SN w0yyiOv!8g6g96M-j4|[N" 00:11:42.070 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:42.328 11:47:32 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:42.328 11:47:32 -- target/invalid.sh@19 -- # local length=41 ll 00:11:42.328 11:47:32 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:42.328 11:47:32 -- target/invalid.sh@21 -- # local chars 00:11:42.328 11:47:32 -- target/invalid.sh@22 -- # local string 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 91 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+='[' 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 35 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+='#' 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 124 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+='|' 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 50 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+=2 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 57 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+=9 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 41 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+=')' 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 92 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+='\' 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 85 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+=U 00:11:42.328 11:47:32 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # printf %x 126 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:42.328 11:47:32 -- target/invalid.sh@25 -- # string+='~' 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.328 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 123 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+='{' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 119 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=w 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 115 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=s 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 100 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=d 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 39 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=\' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 127 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=$'\177' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 64 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=@ 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 88 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=X 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 84 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=T 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 62 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+='>' 00:11:42.329 
11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 94 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+='^' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 53 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=5 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 45 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=- 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 74 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=J 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 112 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=p 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 78 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=N 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 55 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=7 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 126 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+='~' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 93 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=']' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 64 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=@ 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 80 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=P 00:11:42.329 
11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 39 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=\' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 32 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=' ' 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # printf %x 54 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:42.329 11:47:32 -- target/invalid.sh@25 -- # string+=6 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.329 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.587 11:47:32 -- target/invalid.sh@25 -- # printf %x 110 00:11:42.587 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:42.587 11:47:32 -- target/invalid.sh@25 -- # string+=n 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 53 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=5 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 43 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=+ 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 86 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=V 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 83 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=S 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 65 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=A 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 73 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=I 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # printf %x 32 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:42.588 11:47:32 -- target/invalid.sh@25 -- # string+=' ' 00:11:42.588 
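The long run of near-identical xtrace entries above and below this point is a single loop in test/nvmf/target/invalid.sh: gen_random_s appends one random printable character per iteration until the requested length (41 here) is reached. A condensed sketch of that pattern, reconstructed from the trace rather than copied from the script, so the variable names and the RANDOM-based selection are assumptions:

    # Sketch of the gen_random_s pattern seen in the trace (illustrative only).
    gen_random_s() {
        local length=$1 ll code ch string=''
        # Printable ASCII codes 32..127, matching the chars=(...) array above.
        local chars=($(seq 32 127))
        for (( ll = 0; ll < length; ll++ )); do
            # Pick a code, render it as \xNN, and append the resulting character.
            code=${chars[RANDOM % ${#chars[@]}]}
            printf -v ch "\x$(printf %x "$code")"
            string+=$ch
        done
        echo "$string"
    }
    gen_random_s 41    # e.g. the 41-character model number being built here

The generated string is then handed to nvmf_create_subsystem below, and the test only passes if the RPC rejects it with the expected "Invalid MN" error.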
11:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.588 11:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.588 11:47:32 -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:11:42.588 11:47:32 -- target/invalid.sh@31 -- # echo '[#|29)\U~{wsd'\''@XT>^5-JpN7~]@P'\'' 6n5+VSAI ' 00:11:42.588 11:47:32 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[#|29)\U~{wsd'\''@XT>^5-JpN7~]@P'\'' 6n5+VSAI ' nqn.2016-06.io.spdk:cnode557 00:11:42.588 [2024-04-18 11:47:33.086712] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode557: invalid model number '[#|29)\U~{wsd'@XT>^5-JpN7~]@P' 6n5+VSAI ' 00:11:42.588 11:47:33 -- target/invalid.sh@58 -- # out='request: 00:11:42.588 { 00:11:42.588 "nqn": "nqn.2016-06.io.spdk:cnode557", 00:11:42.588 "model_number": "[#|29)\\U~{wsd'\''\u007f@XT>^5-JpN7~]@P'\'' 6n5+VSAI ", 00:11:42.588 "method": "nvmf_create_subsystem", 00:11:42.588 "req_id": 1 00:11:42.588 } 00:11:42.588 Got JSON-RPC error response 00:11:42.588 response: 00:11:42.588 { 00:11:42.588 "code": -32602, 00:11:42.588 "message": "Invalid MN [#|29)\\U~{wsd'\''\u007f@XT>^5-JpN7~]@P'\'' 6n5+VSAI " 00:11:42.588 }' 00:11:42.588 11:47:33 -- target/invalid.sh@59 -- # [[ request: 00:11:42.588 { 00:11:42.588 "nqn": "nqn.2016-06.io.spdk:cnode557", 00:11:42.588 "model_number": "[#|29)\\U~{wsd'\u007f@XT>^5-JpN7~]@P' 6n5+VSAI ", 00:11:42.588 "method": "nvmf_create_subsystem", 00:11:42.588 "req_id": 1 00:11:42.588 } 00:11:42.588 Got JSON-RPC error response 00:11:42.588 response: 00:11:42.588 { 00:11:42.588 "code": -32602, 00:11:42.588 "message": "Invalid MN [#|29)\\U~{wsd'\u007f@XT>^5-JpN7~]@P' 6n5+VSAI " 00:11:42.588 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:42.588 11:47:33 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:42.845 [2024-04-18 11:47:33.271411] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.845 11:47:33 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:43.102 11:47:33 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:43.102 11:47:33 -- target/invalid.sh@67 -- # echo '' 00:11:43.102 11:47:33 -- target/invalid.sh@67 -- # head -n 1 00:11:43.102 11:47:33 -- target/invalid.sh@67 -- # IP= 00:11:43.102 11:47:33 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:43.360 [2024-04-18 11:47:33.656957] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:43.360 11:47:33 -- target/invalid.sh@69 -- # out='request: 00:11:43.360 { 00:11:43.360 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:43.360 "listen_address": { 00:11:43.360 "trtype": "tcp", 00:11:43.360 "traddr": "", 00:11:43.360 "trsvcid": "4421" 00:11:43.360 }, 00:11:43.360 "method": "nvmf_subsystem_remove_listener", 00:11:43.360 "req_id": 1 00:11:43.360 } 00:11:43.360 Got JSON-RPC error response 00:11:43.360 response: 00:11:43.360 { 00:11:43.360 "code": -32602, 00:11:43.360 "message": "Invalid parameters" 00:11:43.360 }' 00:11:43.360 11:47:33 -- target/invalid.sh@70 -- # [[ request: 00:11:43.360 { 00:11:43.360 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:43.360 "listen_address": { 00:11:43.360 "trtype": "tcp", 00:11:43.360 "traddr": "", 00:11:43.360 "trsvcid": "4421" 00:11:43.360 
}, 00:11:43.360 "method": "nvmf_subsystem_remove_listener", 00:11:43.360 "req_id": 1 00:11:43.360 } 00:11:43.360 Got JSON-RPC error response 00:11:43.360 response: 00:11:43.360 { 00:11:43.360 "code": -32602, 00:11:43.360 "message": "Invalid parameters" 00:11:43.360 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:43.360 11:47:33 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20913 -i 0 00:11:43.360 [2024-04-18 11:47:33.845543] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20913: invalid cntlid range [0-65519] 00:11:43.360 11:47:33 -- target/invalid.sh@73 -- # out='request: 00:11:43.360 { 00:11:43.360 "nqn": "nqn.2016-06.io.spdk:cnode20913", 00:11:43.360 "min_cntlid": 0, 00:11:43.360 "method": "nvmf_create_subsystem", 00:11:43.360 "req_id": 1 00:11:43.360 } 00:11:43.360 Got JSON-RPC error response 00:11:43.360 response: 00:11:43.360 { 00:11:43.360 "code": -32602, 00:11:43.360 "message": "Invalid cntlid range [0-65519]" 00:11:43.360 }' 00:11:43.360 11:47:33 -- target/invalid.sh@74 -- # [[ request: 00:11:43.360 { 00:11:43.360 "nqn": "nqn.2016-06.io.spdk:cnode20913", 00:11:43.360 "min_cntlid": 0, 00:11:43.360 "method": "nvmf_create_subsystem", 00:11:43.360 "req_id": 1 00:11:43.360 } 00:11:43.360 Got JSON-RPC error response 00:11:43.360 response: 00:11:43.360 { 00:11:43.360 "code": -32602, 00:11:43.360 "message": "Invalid cntlid range [0-65519]" 00:11:43.360 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.360 11:47:33 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17446 -i 65520 00:11:43.618 [2024-04-18 11:47:34.030171] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17446: invalid cntlid range [65520-65519] 00:11:43.618 11:47:34 -- target/invalid.sh@75 -- # out='request: 00:11:43.618 { 00:11:43.618 "nqn": "nqn.2016-06.io.spdk:cnode17446", 00:11:43.618 "min_cntlid": 65520, 00:11:43.618 "method": "nvmf_create_subsystem", 00:11:43.618 "req_id": 1 00:11:43.618 } 00:11:43.618 Got JSON-RPC error response 00:11:43.618 response: 00:11:43.618 { 00:11:43.618 "code": -32602, 00:11:43.618 "message": "Invalid cntlid range [65520-65519]" 00:11:43.618 }' 00:11:43.618 11:47:34 -- target/invalid.sh@76 -- # [[ request: 00:11:43.618 { 00:11:43.618 "nqn": "nqn.2016-06.io.spdk:cnode17446", 00:11:43.618 "min_cntlid": 65520, 00:11:43.618 "method": "nvmf_create_subsystem", 00:11:43.618 "req_id": 1 00:11:43.618 } 00:11:43.618 Got JSON-RPC error response 00:11:43.618 response: 00:11:43.618 { 00:11:43.618 "code": -32602, 00:11:43.618 "message": "Invalid cntlid range [65520-65519]" 00:11:43.618 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.618 11:47:34 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25610 -I 0 00:11:43.876 [2024-04-18 11:47:34.210788] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25610: invalid cntlid range [1-0] 00:11:43.876 11:47:34 -- target/invalid.sh@77 -- # out='request: 00:11:43.876 { 00:11:43.876 "nqn": "nqn.2016-06.io.spdk:cnode25610", 00:11:43.876 "max_cntlid": 0, 00:11:43.876 "method": "nvmf_create_subsystem", 00:11:43.876 "req_id": 1 00:11:43.876 } 00:11:43.876 Got JSON-RPC error response 00:11:43.876 response: 00:11:43.876 { 00:11:43.876 "code": -32602, 
00:11:43.876 "message": "Invalid cntlid range [1-0]" 00:11:43.876 }' 00:11:43.876 11:47:34 -- target/invalid.sh@78 -- # [[ request: 00:11:43.876 { 00:11:43.876 "nqn": "nqn.2016-06.io.spdk:cnode25610", 00:11:43.876 "max_cntlid": 0, 00:11:43.876 "method": "nvmf_create_subsystem", 00:11:43.876 "req_id": 1 00:11:43.876 } 00:11:43.876 Got JSON-RPC error response 00:11:43.876 response: 00:11:43.876 { 00:11:43.876 "code": -32602, 00:11:43.876 "message": "Invalid cntlid range [1-0]" 00:11:43.876 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.876 11:47:34 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10915 -I 65520 00:11:43.876 [2024-04-18 11:47:34.379348] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10915: invalid cntlid range [1-65520] 00:11:43.876 11:47:34 -- target/invalid.sh@79 -- # out='request: 00:11:43.876 { 00:11:43.876 "nqn": "nqn.2016-06.io.spdk:cnode10915", 00:11:43.876 "max_cntlid": 65520, 00:11:43.876 "method": "nvmf_create_subsystem", 00:11:43.876 "req_id": 1 00:11:43.876 } 00:11:43.876 Got JSON-RPC error response 00:11:43.876 response: 00:11:43.876 { 00:11:43.876 "code": -32602, 00:11:43.876 "message": "Invalid cntlid range [1-65520]" 00:11:43.876 }' 00:11:43.876 11:47:34 -- target/invalid.sh@80 -- # [[ request: 00:11:43.876 { 00:11:43.876 "nqn": "nqn.2016-06.io.spdk:cnode10915", 00:11:43.876 "max_cntlid": 65520, 00:11:43.876 "method": "nvmf_create_subsystem", 00:11:43.876 "req_id": 1 00:11:43.876 } 00:11:43.876 Got JSON-RPC error response 00:11:43.876 response: 00:11:43.876 { 00:11:43.876 "code": -32602, 00:11:43.876 "message": "Invalid cntlid range [1-65520]" 00:11:43.876 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.876 11:47:34 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8122 -i 6 -I 5 00:11:44.134 [2024-04-18 11:47:34.560038] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8122: invalid cntlid range [6-5] 00:11:44.134 11:47:34 -- target/invalid.sh@83 -- # out='request: 00:11:44.134 { 00:11:44.134 "nqn": "nqn.2016-06.io.spdk:cnode8122", 00:11:44.134 "min_cntlid": 6, 00:11:44.134 "max_cntlid": 5, 00:11:44.134 "method": "nvmf_create_subsystem", 00:11:44.134 "req_id": 1 00:11:44.134 } 00:11:44.134 Got JSON-RPC error response 00:11:44.134 response: 00:11:44.134 { 00:11:44.134 "code": -32602, 00:11:44.134 "message": "Invalid cntlid range [6-5]" 00:11:44.134 }' 00:11:44.134 11:47:34 -- target/invalid.sh@84 -- # [[ request: 00:11:44.135 { 00:11:44.135 "nqn": "nqn.2016-06.io.spdk:cnode8122", 00:11:44.135 "min_cntlid": 6, 00:11:44.135 "max_cntlid": 5, 00:11:44.135 "method": "nvmf_create_subsystem", 00:11:44.135 "req_id": 1 00:11:44.135 } 00:11:44.135 Got JSON-RPC error response 00:11:44.135 response: 00:11:44.135 { 00:11:44.135 "code": -32602, 00:11:44.135 "message": "Invalid cntlid range [6-5]" 00:11:44.135 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.135 11:47:34 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:44.393 11:47:34 -- target/invalid.sh@87 -- # out='request: 00:11:44.393 { 00:11:44.393 "name": "foobar", 00:11:44.393 "method": "nvmf_delete_target", 00:11:44.393 "req_id": 1 00:11:44.393 } 00:11:44.393 Got JSON-RPC error response 00:11:44.393 response: 00:11:44.393 
{ 00:11:44.393 "code": -32602, 00:11:44.394 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:44.394 }' 00:11:44.394 11:47:34 -- target/invalid.sh@88 -- # [[ request: 00:11:44.394 { 00:11:44.394 "name": "foobar", 00:11:44.394 "method": "nvmf_delete_target", 00:11:44.394 "req_id": 1 00:11:44.394 } 00:11:44.394 Got JSON-RPC error response 00:11:44.394 response: 00:11:44.394 { 00:11:44.394 "code": -32602, 00:11:44.394 "message": "The specified target doesn't exist, cannot delete it." 00:11:44.394 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:44.394 11:47:34 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:44.394 11:47:34 -- target/invalid.sh@91 -- # nvmftestfini 00:11:44.394 11:47:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:44.394 11:47:34 -- nvmf/common.sh@117 -- # sync 00:11:44.394 11:47:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.394 11:47:34 -- nvmf/common.sh@120 -- # set +e 00:11:44.394 11:47:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.394 11:47:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.394 rmmod nvme_tcp 00:11:44.394 rmmod nvme_fabrics 00:11:44.394 rmmod nvme_keyring 00:11:44.394 11:47:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.394 11:47:34 -- nvmf/common.sh@124 -- # set -e 00:11:44.394 11:47:34 -- nvmf/common.sh@125 -- # return 0 00:11:44.394 11:47:34 -- nvmf/common.sh@478 -- # '[' -n 2380513 ']' 00:11:44.394 11:47:34 -- nvmf/common.sh@479 -- # killprocess 2380513 00:11:44.394 11:47:34 -- common/autotest_common.sh@936 -- # '[' -z 2380513 ']' 00:11:44.394 11:47:34 -- common/autotest_common.sh@940 -- # kill -0 2380513 00:11:44.394 11:47:34 -- common/autotest_common.sh@941 -- # uname 00:11:44.394 11:47:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.394 11:47:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2380513 00:11:44.394 11:47:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.394 11:47:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.394 11:47:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2380513' 00:11:44.394 killing process with pid 2380513 00:11:44.394 11:47:34 -- common/autotest_common.sh@955 -- # kill 2380513 00:11:44.394 11:47:34 -- common/autotest_common.sh@960 -- # wait 2380513 00:11:45.768 11:47:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:45.768 11:47:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:45.768 11:47:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:45.768 11:47:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.768 11:47:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.768 11:47:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.768 11:47:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.768 11:47:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.669 11:47:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:47.669 00:11:47.669 real 0m14.110s 00:11:47.669 user 0m22.489s 00:11:47.669 sys 0m6.292s 00:11:47.669 11:47:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:47.669 11:47:38 -- common/autotest_common.sh@10 -- # set +x 00:11:47.669 ************************************ 00:11:47.669 END TEST nvmf_invalid 00:11:47.669 ************************************ 00:11:47.669 11:47:38 -- nvmf/nvmf.sh@31 -- # 
run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:47.669 11:47:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:47.669 11:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.669 11:47:38 -- common/autotest_common.sh@10 -- # set +x 00:11:47.927 ************************************ 00:11:47.927 START TEST nvmf_abort 00:11:47.927 ************************************ 00:11:47.927 11:47:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:47.927 * Looking for test storage... 00:11:48.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.186 11:47:38 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.186 11:47:38 -- nvmf/common.sh@7 -- # uname -s 00:11:48.186 11:47:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.186 11:47:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.186 11:47:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.186 11:47:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.186 11:47:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.186 11:47:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.186 11:47:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.186 11:47:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.186 11:47:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.186 11:47:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.186 11:47:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:48.186 11:47:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:48.186 11:47:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.186 11:47:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.186 11:47:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.186 11:47:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.186 11:47:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.186 11:47:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.186 11:47:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.186 11:47:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.186 11:47:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.186 11:47:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.186 11:47:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.186 11:47:38 -- paths/export.sh@5 -- # export PATH 00:11:48.186 11:47:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.186 11:47:38 -- nvmf/common.sh@47 -- # : 0 00:11:48.186 11:47:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.186 11:47:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.186 11:47:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.186 11:47:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.186 11:47:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.186 11:47:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.186 11:47:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.186 11:47:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.186 11:47:38 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.187 11:47:38 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:48.187 11:47:38 -- target/abort.sh@14 -- # nvmftestinit 00:11:48.187 11:47:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:48.187 11:47:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.187 11:47:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:48.187 11:47:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:48.187 11:47:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:48.187 11:47:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.187 11:47:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.187 11:47:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.187 11:47:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:48.187 11:47:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:48.187 11:47:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.187 11:47:38 -- common/autotest_common.sh@10 -- # set +x 00:11:54.754 11:47:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:11:54.754 11:47:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.754 11:47:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.754 11:47:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.754 11:47:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.754 11:47:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.755 11:47:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.755 11:47:44 -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.755 11:47:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.755 11:47:44 -- nvmf/common.sh@296 -- # e810=() 00:11:54.755 11:47:44 -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.755 11:47:44 -- nvmf/common.sh@297 -- # x722=() 00:11:54.755 11:47:44 -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.755 11:47:44 -- nvmf/common.sh@298 -- # mlx=() 00:11:54.755 11:47:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.755 11:47:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.755 11:47:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.755 11:47:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.755 11:47:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.755 11:47:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.755 11:47:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:54.755 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:54.755 11:47:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.755 11:47:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:54.755 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:54.755 11:47:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
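The nvmf/common.sh trace that starts here (and continues below) is the NIC discovery step: supported ports are classified by PCI vendor/device ID (0x1592/0x159b for E810, 0x37d2 for X722, plus several Mellanox IDs), and their kernel netdev names are read back from sysfs. A simplified sketch of that idea, under the assumption that sysfs is read directly rather than through the script's pci_bus_cache helpers:

    # Illustrative only: classify Intel E810 ports by PCI ID and collect their netdevs.
    intel=0x8086
    declare -a e810=() net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        case "$device" in
            0x1592|0x159b) e810+=("$pci") ;;    # E810 device IDs seen in the trace
        esac
    done
    for pci in "${e810[@]}"; do
        for dev in "$pci"/net/*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"

In this job the two matching ports are 0000:af:00.0 and 0000:af:00.1, bound to the ice driver and exposed as cvl_0_0 and cvl_0_1.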
00:11:54.755 11:47:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.755 11:47:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.755 11:47:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.755 11:47:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.755 11:47:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:54.755 Found net devices under 0000:af:00.0: cvl_0_0 00:11:54.755 11:47:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.755 11:47:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.755 11:47:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.755 11:47:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.755 11:47:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.755 11:47:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:54.755 Found net devices under 0000:af:00.1: cvl_0_1 00:11:54.755 11:47:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.755 11:47:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:54.755 11:47:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:54.755 11:47:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:54.755 11:47:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.755 11:47:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.755 11:47:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.755 11:47:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.755 11:47:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.755 11:47:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.755 11:47:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.755 11:47:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.755 11:47:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.755 11:47:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.755 11:47:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.755 11:47:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.755 11:47:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.755 11:47:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.755 11:47:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.755 11:47:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.755 11:47:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.755 11:47:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.755 11:47:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.755 11:47:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:11:54.755 00:11:54.755 --- 10.0.0.2 ping statistics --- 00:11:54.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.755 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:54.755 11:47:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:11:54.755 00:11:54.755 --- 10.0.0.1 ping statistics --- 00:11:54.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.755 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:54.755 11:47:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.755 11:47:44 -- nvmf/common.sh@411 -- # return 0 00:11:54.755 11:47:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:54.755 11:47:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.755 11:47:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:54.755 11:47:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.755 11:47:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:54.755 11:47:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:54.755 11:47:44 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:54.755 11:47:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:54.755 11:47:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:54.755 11:47:44 -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 11:47:44 -- nvmf/common.sh@470 -- # nvmfpid=2385191 00:11:54.755 11:47:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:54.755 11:47:44 -- nvmf/common.sh@471 -- # waitforlisten 2385191 00:11:54.755 11:47:44 -- common/autotest_common.sh@817 -- # '[' -z 2385191 ']' 00:11:54.755 11:47:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.755 11:47:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.755 11:47:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.755 11:47:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.755 11:47:44 -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 [2024-04-18 11:47:44.996485] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:54.755 [2024-04-18 11:47:44.996571] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.755 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.755 [2024-04-18 11:47:45.125347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.015 [2024-04-18 11:47:45.339232] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.015 [2024-04-18 11:47:45.339274] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:55.015 [2024-04-18 11:47:45.339287] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.015 [2024-04-18 11:47:45.339302] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.015 [2024-04-18 11:47:45.339314] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.015 [2024-04-18 11:47:45.339436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.015 [2024-04-18 11:47:45.339502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.015 [2024-04-18 11:47:45.339508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.275 11:47:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:55.275 11:47:45 -- common/autotest_common.sh@850 -- # return 0 00:11:55.275 11:47:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:55.275 11:47:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:55.275 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 11:47:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.275 11:47:45 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:55.275 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.275 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 [2024-04-18 11:47:45.810870] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:55.535 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.535 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.535 Malloc0 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:55.535 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.535 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.535 Delay0 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:55.535 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.535 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:55.535 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.535 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:55.535 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.535 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.535 [2024-04-18 11:47:45.956161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:55.535 11:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.535 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:55.535 11:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.535 11:47:45 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:55.535 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.535 [2024-04-18 11:47:46.069027] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:58.073 Initializing NVMe Controllers 00:11:58.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:58.073 controller IO queue size 128 less than required 00:11:58.073 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:58.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:58.073 Initialization complete. Launching workers. 00:11:58.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 38173 00:11:58.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38231, failed to submit 66 00:11:58.073 success 38173, unsuccess 58, failed 0 00:11:58.073 11:47:48 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:58.073 11:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.073 11:47:48 -- common/autotest_common.sh@10 -- # set +x 00:11:58.073 11:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.073 11:47:48 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:58.073 11:47:48 -- target/abort.sh@38 -- # nvmftestfini 00:11:58.073 11:47:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:58.073 11:47:48 -- nvmf/common.sh@117 -- # sync 00:11:58.073 11:47:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.073 11:47:48 -- nvmf/common.sh@120 -- # set +e 00:11:58.073 11:47:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.073 11:47:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.073 rmmod nvme_tcp 00:11:58.073 rmmod nvme_fabrics 00:11:58.073 rmmod nvme_keyring 00:11:58.073 11:47:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.073 11:47:48 -- nvmf/common.sh@124 -- # set -e 00:11:58.073 11:47:48 -- nvmf/common.sh@125 -- # return 0 00:11:58.073 11:47:48 -- nvmf/common.sh@478 -- # '[' -n 2385191 ']' 00:11:58.073 11:47:48 -- nvmf/common.sh@479 -- # killprocess 2385191 00:11:58.073 11:47:48 -- common/autotest_common.sh@936 -- # '[' -z 2385191 ']' 00:11:58.073 11:47:48 -- common/autotest_common.sh@940 -- # kill -0 2385191 00:11:58.073 11:47:48 -- common/autotest_common.sh@941 -- # uname 00:11:58.073 11:47:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.073 11:47:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2385191 00:11:58.073 11:47:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:58.073 11:47:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:58.073 11:47:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2385191' 00:11:58.073 killing process with pid 2385191 00:11:58.073 11:47:48 -- common/autotest_common.sh@955 -- # kill 2385191 00:11:58.073 11:47:48 -- 
common/autotest_common.sh@960 -- # wait 2385191 00:11:59.453 11:47:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:59.453 11:47:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:59.453 11:47:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:59.453 11:47:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.453 11:47:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.453 11:47:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.453 11:47:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.453 11:47:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.362 11:47:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.362 00:12:01.362 real 0m13.450s 00:12:01.362 user 0m15.529s 00:12:01.362 sys 0m6.145s 00:12:01.362 11:47:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:01.362 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:12:01.362 ************************************ 00:12:01.362 END TEST nvmf_abort 00:12:01.362 ************************************ 00:12:01.362 11:47:51 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:01.362 11:47:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.362 11:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.362 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:12:01.621 ************************************ 00:12:01.621 START TEST nvmf_ns_hotplug_stress 00:12:01.621 ************************************ 00:12:01.621 11:47:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:01.621 * Looking for test storage... 
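Before the hotplug test gets going, it is worth summarizing what the nvmf_abort run that just finished actually drove. Reconstructed from the rpc_cmd trace above and written as direct rpc.py calls (a sketch only: the real script is test/nvmf/target/abort.sh, and rpc_cmd takes care of reaching the target's RPC socket inside the namespace):

    rpc=./scripts/rpc.py    # assumes the target's RPC socket is reachable at the default path

    # TCP transport, with the same options abort.sh passed above.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MiB malloc bdev with 4 KiB blocks, wrapped in a delay bdev (large artificial
    # latencies, values copied from the trace) so I/O stays in flight long enough to abort.
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem cnode0 (serial SPDK0, any host allowed) with the delay namespace,
    # listening on 10.0.0.2:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Queue I/O at depth 128 from one core and abort it; the run above reported
    # 38231 aborts submitted (66 failed to submit), 38173 successful, 58 unsuccessful.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128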
00:12:01.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.621 11:47:52 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.621 11:47:52 -- nvmf/common.sh@7 -- # uname -s 00:12:01.621 11:47:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.621 11:47:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.621 11:47:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.621 11:47:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.621 11:47:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.621 11:47:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.621 11:47:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.621 11:47:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.621 11:47:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.621 11:47:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.621 11:47:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:01.621 11:47:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:01.621 11:47:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.621 11:47:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.621 11:47:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.621 11:47:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.621 11:47:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.881 11:47:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.881 11:47:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.881 11:47:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.881 11:47:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.881 11:47:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.881 11:47:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.881 11:47:52 -- paths/export.sh@5 -- # export PATH 00:12:01.881 11:47:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.881 11:47:52 -- nvmf/common.sh@47 -- # : 0 00:12:01.881 11:47:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.881 11:47:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.881 11:47:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.881 11:47:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.881 11:47:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.881 11:47:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.881 11:47:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.881 11:47:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.881 11:47:52 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.881 11:47:52 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:12:01.881 11:47:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:01.881 11:47:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.881 11:47:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:01.881 11:47:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:01.881 11:47:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:01.881 11:47:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.881 11:47:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.881 11:47:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.881 11:47:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:01.881 11:47:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:01.881 11:47:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.881 11:47:52 -- common/autotest_common.sh@10 -- # set +x 00:12:08.450 11:47:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:08.450 11:47:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.450 11:47:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.450 11:47:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.450 11:47:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.450 11:47:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.450 11:47:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.450 11:47:58 -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.450 11:47:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.450 11:47:58 -- nvmf/common.sh@296 
-- # e810=() 00:12:08.450 11:47:58 -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.450 11:47:58 -- nvmf/common.sh@297 -- # x722=() 00:12:08.450 11:47:58 -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.450 11:47:58 -- nvmf/common.sh@298 -- # mlx=() 00:12:08.450 11:47:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.450 11:47:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.450 11:47:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.450 11:47:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.450 11:47:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.451 11:47:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.451 11:47:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.451 11:47:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:08.451 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:08.451 11:47:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.451 11:47:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:08.451 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:08.451 11:47:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.451 11:47:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.451 11:47:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.451 11:47:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:08.451 11:47:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.451 11:47:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:08.451 Found 
net devices under 0000:af:00.0: cvl_0_0 00:12:08.451 11:47:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.451 11:47:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.451 11:47:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.451 11:47:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:08.451 11:47:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.451 11:47:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:08.451 Found net devices under 0000:af:00.1: cvl_0_1 00:12:08.451 11:47:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.451 11:47:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:08.451 11:47:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:08.451 11:47:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:08.451 11:47:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:08.451 11:47:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.451 11:47:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.451 11:47:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.451 11:47:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.451 11:47:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.451 11:47:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.451 11:47:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.451 11:47:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.451 11:47:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.451 11:47:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.451 11:47:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.451 11:47:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.451 11:47:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.710 11:47:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.710 11:47:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.710 11:47:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.710 11:47:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.710 11:47:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.710 11:47:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.968 11:47:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:12:08.968 00:12:08.968 --- 10.0.0.2 ping statistics --- 00:12:08.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.968 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:12:08.968 11:47:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:12:08.968 00:12:08.968 --- 10.0.0.1 ping statistics --- 00:12:08.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.968 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:12:08.968 11:47:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.968 11:47:59 -- nvmf/common.sh@411 -- # return 0 00:12:08.968 11:47:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:08.968 11:47:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.968 11:47:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:08.968 11:47:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:08.968 11:47:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.968 11:47:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:08.968 11:47:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:08.968 11:47:59 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:12:08.968 11:47:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:08.968 11:47:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:08.968 11:47:59 -- common/autotest_common.sh@10 -- # set +x 00:12:08.968 11:47:59 -- nvmf/common.sh@470 -- # nvmfpid=2389720 00:12:08.968 11:47:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:08.968 11:47:59 -- nvmf/common.sh@471 -- # waitforlisten 2389720 00:12:08.968 11:47:59 -- common/autotest_common.sh@817 -- # '[' -z 2389720 ']' 00:12:08.968 11:47:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.968 11:47:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:08.968 11:47:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.968 11:47:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:08.968 11:47:59 -- common/autotest_common.sh@10 -- # set +x 00:12:08.968 [2024-04-18 11:47:59.412302] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:08.968 [2024-04-18 11:47:59.412402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.968 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.227 [2024-04-18 11:47:59.540947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.227 [2024-04-18 11:47:59.763133] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.227 [2024-04-18 11:47:59.763181] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.227 [2024-04-18 11:47:59.763194] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.227 [2024-04-18 11:47:59.763207] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.227 [2024-04-18 11:47:59.763220] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
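For orientation, the ns_hotplug_stress flow traced below reduces to roughly the following sketch, reconstructed from the rpc.py calls visible in this log (the full /var/jenkins/.../spdk/scripts/rpc.py path is shortened to rpc.py, and the backgrounding/loop syntax is paraphrased from test/nvmf/target/ns_hotplug_stress.sh rather than quoted verbatim):

    # Target-side setup as traced: TCP transport, subsystem cnode1, listener on
    # 10.0.0.2:4420, a delay bdev stacked on a malloc bdev, and a resizable null bdev.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Run a 30s randread workload against the subsystem, and keep hot-removing /
    # re-adding namespace 1 and growing NULL1 for as long as perf is still running.
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done
    wait "$PERF_PID"

Each pass through the loop accounts for one of the null_size=1001 ... 1054 increments in the trace that follows; the loop ends once the 30-second perf run exits (hence the "No such process" result from kill -0 further down, after the perf latency summary).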
00:12:09.227 [2024-04-18 11:47:59.763350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.227 [2024-04-18 11:47:59.763423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.227 [2024-04-18 11:47:59.763430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.793 11:48:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.793 11:48:00 -- common/autotest_common.sh@850 -- # return 0 00:12:09.794 11:48:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:09.794 11:48:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:09.794 11:48:00 -- common/autotest_common.sh@10 -- # set +x 00:12:09.794 11:48:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.794 11:48:00 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:12:09.794 11:48:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:10.052 [2024-04-18 11:48:00.384658] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.052 11:48:00 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.311 11:48:00 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.311 [2024-04-18 11:48:00.767410] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.311 11:48:00 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.569 11:48:00 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:10.828 Malloc0 00:12:10.828 11:48:01 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:10.828 Delay0 00:12:10.828 11:48:01 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.086 11:48:01 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:11.344 NULL1 00:12:11.344 11:48:01 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:11.601 11:48:01 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:11.601 11:48:01 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2390396 00:12:11.601 11:48:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:11.601 11:48:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.601 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.601 11:48:02 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.858 11:48:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:12:11.858 11:48:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:12.116 true 00:12:12.116 11:48:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:12.116 11:48:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.116 11:48:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.373 11:48:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:12:12.373 11:48:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:12.630 true 00:12:12.630 11:48:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:12.630 11:48:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.887 11:48:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.887 11:48:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:12:12.887 11:48:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:13.146 true 00:12:13.146 11:48:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:13.146 11:48:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.404 11:48:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.663 11:48:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:12:13.663 11:48:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:13.663 true 00:12:13.663 11:48:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:13.663 11:48:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.922 11:48:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.180 11:48:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:12:14.180 11:48:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:14.439 true 00:12:14.439 11:48:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:14.439 11:48:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.439 11:48:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:12:14.698 11:48:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:12:14.698 11:48:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:14.958 true 00:12:14.958 11:48:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:14.958 11:48:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.216 11:48:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.216 11:48:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:12:15.216 11:48:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:15.474 true 00:12:15.474 11:48:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:15.474 11:48:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.732 11:48:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.989 11:48:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:15.989 11:48:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:15.989 true 00:12:15.989 11:48:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:15.989 11:48:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.246 11:48:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.503 11:48:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:16.503 11:48:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:16.503 true 00:12:16.503 11:48:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:16.503 11:48:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.761 11:48:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.048 11:48:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:17.048 11:48:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:17.049 true 00:12:17.049 11:48:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:17.049 11:48:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.306 11:48:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.563 11:48:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:12:17.563 11:48:07 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:17.820 true 00:12:17.820 11:48:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:17.820 11:48:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.820 11:48:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.077 11:48:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:18.077 11:48:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:18.335 true 00:12:18.335 11:48:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:18.335 11:48:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.335 11:48:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.594 11:48:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:18.594 11:48:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:18.852 true 00:12:18.852 11:48:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:18.852 11:48:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.110 11:48:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.110 11:48:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:19.110 11:48:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:19.369 true 00:12:19.369 11:48:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:19.369 11:48:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.626 11:48:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.885 11:48:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:12:19.885 11:48:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:19.885 true 00:12:19.885 11:48:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:19.885 11:48:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.142 11:48:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.401 11:48:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:20.401 11:48:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1016 00:12:20.659 true 00:12:20.659 11:48:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:20.659 11:48:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.659 11:48:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.936 11:48:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:20.936 11:48:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:21.196 true 00:12:21.196 11:48:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:21.196 11:48:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.455 11:48:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.712 11:48:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:21.712 11:48:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:21.712 true 00:12:21.712 11:48:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:21.712 11:48:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.970 11:48:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.228 11:48:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:22.228 11:48:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:22.228 true 00:12:22.228 11:48:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:22.228 11:48:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.486 11:48:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.744 11:48:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:22.744 11:48:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:22.744 true 00:12:23.001 11:48:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:23.001 11:48:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.001 11:48:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.259 11:48:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:23.259 11:48:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:23.517 true 00:12:23.517 11:48:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:23.517 
11:48:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.517 11:48:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.775 11:48:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:23.776 11:48:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:24.033 true 00:12:24.033 11:48:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:24.033 11:48:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.290 11:48:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.290 11:48:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:24.290 11:48:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:24.549 true 00:12:24.549 11:48:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:24.549 11:48:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.807 11:48:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.807 11:48:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:24.807 11:48:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:25.066 true 00:12:25.066 11:48:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:25.066 11:48:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.324 11:48:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.582 11:48:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:25.582 11:48:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:25.582 true 00:12:25.582 11:48:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:25.582 11:48:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.840 11:48:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.097 11:48:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:12:26.097 11:48:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:26.097 true 00:12:26.097 11:48:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:26.097 11:48:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.355 11:48:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.614 11:48:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:12:26.614 11:48:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:26.872 true 00:12:26.872 11:48:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:26.872 11:48:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.872 11:48:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.131 11:48:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:12:27.131 11:48:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:27.389 true 00:12:27.389 11:48:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:27.389 11:48:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.647 11:48:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.647 11:48:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:12:27.647 11:48:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:27.906 true 00:12:27.906 11:48:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:27.906 11:48:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.165 11:48:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.165 11:48:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:12:28.165 11:48:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:12:28.424 true 00:12:28.424 11:48:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:28.424 11:48:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.683 11:48:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.950 11:48:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:12:28.950 11:48:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:12:28.950 true 00:12:28.950 11:48:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:28.950 11:48:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.263 11:48:19 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.522 11:48:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:12:29.522 11:48:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:12:29.522 true 00:12:29.522 11:48:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:29.522 11:48:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.781 11:48:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.040 11:48:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:12:30.040 11:48:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:12:30.300 true 00:12:30.300 11:48:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:30.300 11:48:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.300 11:48:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.559 11:48:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:12:30.559 11:48:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:12:30.818 true 00:12:30.818 11:48:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:30.818 11:48:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.077 11:48:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.077 11:48:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:12:31.077 11:48:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:12:31.337 true 00:12:31.337 11:48:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:31.337 11:48:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.596 11:48:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.855 11:48:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:12:31.855 11:48:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:12:31.855 true 00:12:31.855 11:48:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:31.855 11:48:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.115 11:48:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:12:32.374 11:48:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:12:32.374 11:48:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:12:32.374 true 00:12:32.374 11:48:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:32.374 11:48:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.633 11:48:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.892 11:48:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:12:32.892 11:48:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:12:33.151 true 00:12:33.151 11:48:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:33.151 11:48:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.151 11:48:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.411 11:48:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:12:33.411 11:48:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:12:33.670 true 00:12:33.670 11:48:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:33.670 11:48:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.929 11:48:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.929 11:48:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:12:33.929 11:48:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:12:34.189 true 00:12:34.189 11:48:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:34.189 11:48:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.448 11:48:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.707 11:48:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:12:34.707 11:48:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:12:34.707 true 00:12:34.707 11:48:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:34.707 11:48:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.967 11:48:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.226 11:48:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:12:35.226 11:48:25 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:12:35.226 true 00:12:35.226 11:48:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:35.226 11:48:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.485 11:48:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.744 11:48:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:12:35.744 11:48:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:12:36.003 true 00:12:36.003 11:48:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:36.003 11:48:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.003 11:48:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.263 11:48:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:12:36.263 11:48:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:12:36.522 true 00:12:36.522 11:48:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:36.522 11:48:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.782 11:48:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.782 11:48:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:12:36.782 11:48:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:12:37.041 true 00:12:37.041 11:48:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:37.041 11:48:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.301 11:48:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.559 11:48:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:12:37.559 11:48:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:12:37.559 true 00:12:37.559 11:48:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:37.559 11:48:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.818 11:48:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.077 11:48:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:12:38.077 11:48:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1047 00:12:38.077 true 00:12:38.336 11:48:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:38.336 11:48:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.336 11:48:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.594 11:48:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:12:38.594 11:48:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:12:38.854 true 00:12:38.854 11:48:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:38.854 11:48:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.113 11:48:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.372 11:48:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:12:39.372 11:48:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:12:39.372 true 00:12:39.372 11:48:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:39.372 11:48:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.631 11:48:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.890 11:48:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:12:39.890 11:48:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:12:39.890 true 00:12:40.149 11:48:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:40.149 11:48:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.149 11:48:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.408 11:48:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:12:40.408 11:48:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:12:40.667 true 00:12:40.667 11:48:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:40.667 11:48:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.926 11:48:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.926 11:48:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:12:40.926 11:48:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:12:41.185 true 00:12:41.185 11:48:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:41.185 
11:48:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.444 11:48:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.756 11:48:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:12:41.756 11:48:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:12:41.756 true 00:12:41.756 11:48:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:41.756 11:48:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.756 Initializing NVMe Controllers 00:12:41.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.756 Controller IO queue size 128, less than required. 00:12:41.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:41.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:41.756 Initialization complete. Launching workers. 00:12:41.756 ======================================================== 00:12:41.756 Latency(us) 00:12:41.756 Device Information : IOPS MiB/s Average min max 00:12:41.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23662.23 11.55 5409.45 2365.96 9681.59 00:12:41.756 ======================================================== 00:12:41.756 Total : 23662.23 11.55 5409.45 2365.96 9681.59 00:12:41.756 00:12:42.020 11:48:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.020 11:48:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:12:42.020 11:48:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:12:42.280 true 00:12:42.280 11:48:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2390396 00:12:42.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2390396) - No such process 00:12:42.280 11:48:32 -- target/ns_hotplug_stress.sh@44 -- # wait 2390396 00:12:42.280 11:48:32 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:42.280 11:48:32 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:12:42.280 11:48:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:42.280 11:48:32 -- nvmf/common.sh@117 -- # sync 00:12:42.280 11:48:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.280 11:48:32 -- nvmf/common.sh@120 -- # set +e 00:12:42.280 11:48:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.280 11:48:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.280 rmmod nvme_tcp 00:12:42.280 rmmod nvme_fabrics 00:12:42.280 rmmod nvme_keyring 00:12:42.280 11:48:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.280 11:48:32 -- nvmf/common.sh@124 -- # set -e 00:12:42.280 11:48:32 -- nvmf/common.sh@125 -- # return 0 00:12:42.280 11:48:32 -- nvmf/common.sh@478 -- # '[' -n 2389720 ']' 00:12:42.280 11:48:32 -- nvmf/common.sh@479 -- # killprocess 2389720 00:12:42.280 11:48:32 -- common/autotest_common.sh@936 -- # '[' -z 2389720 ']' 00:12:42.280 11:48:32 -- 
common/autotest_common.sh@940 -- # kill -0 2389720 00:12:42.280 11:48:32 -- common/autotest_common.sh@941 -- # uname 00:12:42.280 11:48:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.280 11:48:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2389720 00:12:42.540 11:48:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:42.540 11:48:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:42.540 11:48:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2389720' 00:12:42.540 killing process with pid 2389720 00:12:42.540 11:48:32 -- common/autotest_common.sh@955 -- # kill 2389720 00:12:42.540 11:48:32 -- common/autotest_common.sh@960 -- # wait 2389720 00:12:43.918 11:48:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:43.918 11:48:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:43.918 11:48:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:43.918 11:48:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.918 11:48:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.918 11:48:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.918 11:48:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.918 11:48:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.826 11:48:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.826 00:12:45.826 real 0m44.256s 00:12:45.826 user 2m34.823s 00:12:45.826 sys 0m17.101s 00:12:45.826 11:48:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.826 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:12:45.826 ************************************ 00:12:45.826 END TEST nvmf_ns_hotplug_stress 00:12:45.826 ************************************ 00:12:45.826 11:48:36 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:45.826 11:48:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:45.826 11:48:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.826 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:12:46.085 ************************************ 00:12:46.085 START TEST nvmf_connect_stress 00:12:46.085 ************************************ 00:12:46.085 11:48:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:46.085 * Looking for test storage... 
00:12:46.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.085 11:48:36 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.085 11:48:36 -- nvmf/common.sh@7 -- # uname -s 00:12:46.085 11:48:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.085 11:48:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.085 11:48:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.085 11:48:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.085 11:48:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.085 11:48:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.085 11:48:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.085 11:48:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.085 11:48:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.085 11:48:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.085 11:48:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:46.085 11:48:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:46.085 11:48:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.085 11:48:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.085 11:48:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.085 11:48:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.345 11:48:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.345 11:48:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.345 11:48:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.345 11:48:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.345 11:48:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.345 11:48:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.345 11:48:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.345 11:48:36 -- paths/export.sh@5 -- # export PATH 00:12:46.345 11:48:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.345 11:48:36 -- nvmf/common.sh@47 -- # : 0 00:12:46.345 11:48:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.345 11:48:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.345 11:48:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.345 11:48:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.345 11:48:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.345 11:48:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.345 11:48:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.345 11:48:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.345 11:48:36 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:46.345 11:48:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:46.345 11:48:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.345 11:48:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:46.345 11:48:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:46.345 11:48:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:46.345 11:48:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.345 11:48:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.345 11:48:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.345 11:48:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:46.345 11:48:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:46.345 11:48:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.345 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:12:52.918 11:48:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:52.918 11:48:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.918 11:48:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.918 11:48:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.918 11:48:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.918 11:48:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.918 11:48:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.918 11:48:43 -- nvmf/common.sh@295 -- # net_devs=() 00:12:52.918 11:48:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.918 11:48:43 -- nvmf/common.sh@296 -- # e810=() 00:12:52.918 11:48:43 -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.918 11:48:43 -- nvmf/common.sh@297 -- # x722=() 
00:12:52.918 11:48:43 -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.918 11:48:43 -- nvmf/common.sh@298 -- # mlx=() 00:12:52.918 11:48:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.918 11:48:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.918 11:48:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.918 11:48:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.918 11:48:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.918 11:48:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.918 11:48:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:52.918 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:52.918 11:48:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.918 11:48:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:52.918 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:52.918 11:48:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.918 11:48:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.918 11:48:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.918 11:48:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:52.918 11:48:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.918 11:48:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:52.918 Found net devices under 0000:af:00.0: cvl_0_0 00:12:52.918 11:48:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:12:52.918 11:48:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.918 11:48:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.918 11:48:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:52.918 11:48:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.918 11:48:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:52.918 Found net devices under 0000:af:00.1: cvl_0_1 00:12:52.918 11:48:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.918 11:48:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:52.918 11:48:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:52.918 11:48:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:52.918 11:48:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:52.918 11:48:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.918 11:48:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.918 11:48:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.918 11:48:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.918 11:48:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.918 11:48:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.918 11:48:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.918 11:48:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.918 11:48:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.918 11:48:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.918 11:48:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.918 11:48:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.918 11:48:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.918 11:48:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.918 11:48:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.918 11:48:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.918 11:48:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.177 11:48:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.177 11:48:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.177 11:48:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:53.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:12:53.177 00:12:53.177 --- 10.0.0.2 ping statistics --- 00:12:53.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.177 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:12:53.177 11:48:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:12:53.177 00:12:53.177 --- 10.0.0.1 ping statistics --- 00:12:53.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.177 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:53.177 11:48:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.178 11:48:43 -- nvmf/common.sh@411 -- # return 0 00:12:53.178 11:48:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:53.178 11:48:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.178 11:48:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:53.178 11:48:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:53.178 11:48:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.178 11:48:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:53.178 11:48:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:53.178 11:48:43 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:53.178 11:48:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:53.178 11:48:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:53.178 11:48:43 -- common/autotest_common.sh@10 -- # set +x 00:12:53.178 11:48:43 -- nvmf/common.sh@470 -- # nvmfpid=2400028 00:12:53.178 11:48:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:53.178 11:48:43 -- nvmf/common.sh@471 -- # waitforlisten 2400028 00:12:53.178 11:48:43 -- common/autotest_common.sh@817 -- # '[' -z 2400028 ']' 00:12:53.178 11:48:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.178 11:48:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.178 11:48:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.178 11:48:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.178 11:48:43 -- common/autotest_common.sh@10 -- # set +x 00:12:53.178 [2024-04-18 11:48:43.696359] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:53.178 [2024-04-18 11:48:43.696469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.437 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.437 [2024-04-18 11:48:43.829101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.695 [2024-04-18 11:48:44.047923] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.695 [2024-04-18 11:48:44.047968] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.695 [2024-04-18 11:48:44.047981] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.695 [2024-04-18 11:48:44.047994] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.695 [2024-04-18 11:48:44.048007] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
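Stripped of the xtrace noise, the nvmf_tcp_init sequence above builds a two-sided test topology out of the two E810 ports found earlier: cvl_0_0 is moved into a fresh network namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened on the initiator port, and reachability is verified in both directions before the kernel NVMe/TCP module is loaded. A consolidated sketch of the traced commands (a paraphrase of the trace, not a substitute for it; interface names are the ones discovered on this host, and the commands need root):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # accept inbound TCP/4420 on the initiator port
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
    modprobe nvme-tcp                                             # kernel NVMe/TCP initiator module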
00:12:53.695 [2024-04-18 11:48:44.048138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.695 [2024-04-18 11:48:44.048199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.695 [2024-04-18 11:48:44.048207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.954 11:48:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:53.954 11:48:44 -- common/autotest_common.sh@850 -- # return 0 00:12:53.954 11:48:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:53.954 11:48:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:53.954 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.213 11:48:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.213 11:48:44 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.213 11:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.213 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.213 [2024-04-18 11:48:44.509116] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.213 11:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.213 11:48:44 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:54.213 11:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.213 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.213 11:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.213 11:48:44 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.213 11:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.213 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.213 [2024-04-18 11:48:44.552338] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.213 11:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.213 11:48:44 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:54.213 11:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.213 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.213 NULL1 00:12:54.213 11:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.213 11:48:44 -- target/connect_stress.sh@21 -- # PERF_PID=2400284 00:12:54.213 11:48:44 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:54.213 11:48:44 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:54.213 11:48:44 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.213 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.213 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:54.214 11:48:44 -- target/connect_stress.sh@28 -- # cat 00:12:54.214 11:48:44 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:54.214 11:48:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.214 11:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.214 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.473 11:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.473 11:48:45 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:54.473 11:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.473 11:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.473 11:48:45 -- common/autotest_common.sh@10 -- # set +x 00:12:55.040 11:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.040 11:48:45 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:55.040 11:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.040 11:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.040 11:48:45 -- common/autotest_common.sh@10 -- # set +x 00:12:55.299 11:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.299 11:48:45 -- target/connect_stress.sh@34 -- # 
kill -0 2400284 00:12:55.299 11:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.299 11:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.299 11:48:45 -- common/autotest_common.sh@10 -- # set +x 00:12:55.558 11:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.558 11:48:45 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:55.558 11:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.558 11:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.558 11:48:45 -- common/autotest_common.sh@10 -- # set +x 00:12:55.817 11:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.817 11:48:46 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:55.817 11:48:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.817 11:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.817 11:48:46 -- common/autotest_common.sh@10 -- # set +x 00:12:56.385 11:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.385 11:48:46 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:56.385 11:48:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.385 11:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.385 11:48:46 -- common/autotest_common.sh@10 -- # set +x 00:12:56.644 11:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.644 11:48:46 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:56.644 11:48:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.644 11:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.644 11:48:46 -- common/autotest_common.sh@10 -- # set +x 00:12:56.902 11:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.902 11:48:47 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:56.902 11:48:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.902 11:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.902 11:48:47 -- common/autotest_common.sh@10 -- # set +x 00:12:57.161 11:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.161 11:48:47 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:57.161 11:48:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.161 11:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.161 11:48:47 -- common/autotest_common.sh@10 -- # set +x 00:12:57.419 11:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.420 11:48:47 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:57.420 11:48:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.420 11:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.420 11:48:47 -- common/autotest_common.sh@10 -- # set +x 00:12:57.989 11:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.989 11:48:48 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:57.989 11:48:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.989 11:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.989 11:48:48 -- common/autotest_common.sh@10 -- # set +x 00:12:58.248 11:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.248 11:48:48 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:58.248 11:48:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.248 11:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.248 11:48:48 -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 11:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.508 11:48:48 -- target/connect_stress.sh@34 -- # kill -0 
2400284 00:12:58.508 11:48:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.508 11:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.508 11:48:48 -- common/autotest_common.sh@10 -- # set +x 00:12:58.767 11:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.767 11:48:49 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:58.767 11:48:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.767 11:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.767 11:48:49 -- common/autotest_common.sh@10 -- # set +x 00:12:59.335 11:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.335 11:48:49 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:59.335 11:48:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.335 11:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.335 11:48:49 -- common/autotest_common.sh@10 -- # set +x 00:12:59.622 11:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.622 11:48:49 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:59.622 11:48:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.622 11:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.622 11:48:49 -- common/autotest_common.sh@10 -- # set +x 00:12:59.881 11:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.881 11:48:50 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:12:59.881 11:48:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.881 11:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.881 11:48:50 -- common/autotest_common.sh@10 -- # set +x 00:13:00.139 11:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.139 11:48:50 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:00.139 11:48:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.139 11:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.139 11:48:50 -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 11:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.398 11:48:50 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:00.398 11:48:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.398 11:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.398 11:48:50 -- common/autotest_common.sh@10 -- # set +x 00:13:00.967 11:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.967 11:48:51 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:00.967 11:48:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.967 11:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.967 11:48:51 -- common/autotest_common.sh@10 -- # set +x 00:13:01.226 11:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.226 11:48:51 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:01.226 11:48:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.226 11:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.226 11:48:51 -- common/autotest_common.sh@10 -- # set +x 00:13:01.485 11:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.485 11:48:51 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:01.485 11:48:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.485 11:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.485 11:48:51 -- common/autotest_common.sh@10 -- # set +x 00:13:01.744 11:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.744 11:48:52 -- target/connect_stress.sh@34 -- # kill -0 2400284 
00:13:01.744 11:48:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.744 11:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.744 11:48:52 -- common/autotest_common.sh@10 -- # set +x 00:13:02.004 11:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.004 11:48:52 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:02.004 11:48:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.004 11:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.004 11:48:52 -- common/autotest_common.sh@10 -- # set +x 00:13:02.571 11:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.572 11:48:52 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:02.572 11:48:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.572 11:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.572 11:48:52 -- common/autotest_common.sh@10 -- # set +x 00:13:02.831 11:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.831 11:48:53 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:02.831 11:48:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.831 11:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.831 11:48:53 -- common/autotest_common.sh@10 -- # set +x 00:13:03.090 11:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.090 11:48:53 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:03.090 11:48:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.090 11:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.090 11:48:53 -- common/autotest_common.sh@10 -- # set +x 00:13:03.350 11:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.350 11:48:53 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:03.350 11:48:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.350 11:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.350 11:48:53 -- common/autotest_common.sh@10 -- # set +x 00:13:03.918 11:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.918 11:48:54 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:03.918 11:48:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.918 11:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.918 11:48:54 -- common/autotest_common.sh@10 -- # set +x 00:13:04.177 11:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.177 11:48:54 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:04.177 11:48:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.177 11:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.177 11:48:54 -- common/autotest_common.sh@10 -- # set +x 00:13:04.177 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.436 11:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.436 11:48:54 -- target/connect_stress.sh@34 -- # kill -0 2400284 00:13:04.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2400284) - No such process 00:13:04.436 11:48:54 -- target/connect_stress.sh@38 -- # wait 2400284 00:13:04.436 11:48:54 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:04.436 11:48:54 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:04.436 11:48:54 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:04.436 11:48:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:04.436 11:48:54 -- nvmf/common.sh@117 -- # sync 
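The connect_stress phase that just finished follows a simple pattern: start nvmf_tgt inside the target namespace, configure it over RPC, aim the connect_stress stressor at the new listener for 10 seconds, and keep issuing RPC batches for as long as the stressor stays alive; the "kill: (2400284) - No such process" message above is the liveness probe noticing that the stressor has exited, which ends the loop. A consolidated sketch of the traced steps (rpc.py is assumed to be the transport behind the harness's rpc_cmd wrapper, and the RPC batch written into rpc.txt is not visible in the trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # target app on cores 1-3
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                              # TCP transport, options as traced
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow-any-host subsystem
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                                      # ~1 GB null bdev, 512 B blocks
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 $PERF_PID; do
        rpc_cmd < rpc.txt        # assumption: the batch is fed from rpc.txt; the trace only shows rpc_cmd
    done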
00:13:04.436 11:48:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.436 11:48:54 -- nvmf/common.sh@120 -- # set +e 00:13:04.436 11:48:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.436 11:48:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.436 rmmod nvme_tcp 00:13:04.436 rmmod nvme_fabrics 00:13:04.436 rmmod nvme_keyring 00:13:04.436 11:48:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.436 11:48:54 -- nvmf/common.sh@124 -- # set -e 00:13:04.436 11:48:54 -- nvmf/common.sh@125 -- # return 0 00:13:04.436 11:48:54 -- nvmf/common.sh@478 -- # '[' -n 2400028 ']' 00:13:04.436 11:48:54 -- nvmf/common.sh@479 -- # killprocess 2400028 00:13:04.436 11:48:54 -- common/autotest_common.sh@936 -- # '[' -z 2400028 ']' 00:13:04.436 11:48:54 -- common/autotest_common.sh@940 -- # kill -0 2400028 00:13:04.436 11:48:54 -- common/autotest_common.sh@941 -- # uname 00:13:04.436 11:48:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.436 11:48:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2400028 00:13:04.436 11:48:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:04.436 11:48:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:04.436 11:48:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2400028' 00:13:04.436 killing process with pid 2400028 00:13:04.436 11:48:54 -- common/autotest_common.sh@955 -- # kill 2400028 00:13:04.436 11:48:54 -- common/autotest_common.sh@960 -- # wait 2400028 00:13:05.816 11:48:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:05.816 11:48:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:05.816 11:48:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:05.816 11:48:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.816 11:48:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.816 11:48:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.816 11:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.816 11:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.352 11:48:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.352 00:13:08.352 real 0m21.822s 00:13:08.352 user 0m42.927s 00:13:08.352 sys 0m10.013s 00:13:08.352 11:48:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:08.352 11:48:58 -- common/autotest_common.sh@10 -- # set +x 00:13:08.352 ************************************ 00:13:08.352 END TEST nvmf_connect_stress 00:13:08.352 ************************************ 00:13:08.352 11:48:58 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.352 11:48:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:08.352 11:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.352 11:48:58 -- common/autotest_common.sh@10 -- # set +x 00:13:08.352 ************************************ 00:13:08.352 START TEST nvmf_fused_ordering 00:13:08.352 ************************************ 00:13:08.352 11:48:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.352 * Looking for test storage... 
00:13:08.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.352 11:48:58 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.352 11:48:58 -- nvmf/common.sh@7 -- # uname -s 00:13:08.352 11:48:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.352 11:48:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.352 11:48:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.352 11:48:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.352 11:48:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.352 11:48:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.352 11:48:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.352 11:48:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.352 11:48:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.352 11:48:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.352 11:48:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:08.352 11:48:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:08.352 11:48:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.352 11:48:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.352 11:48:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.352 11:48:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.352 11:48:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.352 11:48:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.352 11:48:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.352 11:48:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.352 11:48:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.353 11:48:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.353 11:48:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.353 11:48:58 -- paths/export.sh@5 -- # export PATH 00:13:08.353 11:48:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.353 11:48:58 -- nvmf/common.sh@47 -- # : 0 00:13:08.353 11:48:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.353 11:48:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.353 11:48:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.353 11:48:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.353 11:48:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.353 11:48:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.353 11:48:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.353 11:48:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.353 11:48:58 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:08.353 11:48:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:08.353 11:48:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.353 11:48:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:08.353 11:48:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:08.353 11:48:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:08.353 11:48:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.353 11:48:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.353 11:48:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.353 11:48:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:08.353 11:48:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:08.353 11:48:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.353 11:48:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.923 11:49:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:14.923 11:49:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.923 11:49:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.923 11:49:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.923 11:49:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.923 11:49:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.923 11:49:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.923 11:49:04 -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.923 11:49:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.923 11:49:04 -- nvmf/common.sh@296 -- # e810=() 00:13:14.923 11:49:04 -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.923 11:49:04 -- nvmf/common.sh@297 -- # x722=() 
00:13:14.923 11:49:04 -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.923 11:49:04 -- nvmf/common.sh@298 -- # mlx=() 00:13:14.923 11:49:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.923 11:49:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.923 11:49:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.923 11:49:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.923 11:49:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.923 11:49:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.923 11:49:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:14.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:14.923 11:49:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.923 11:49:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:14.923 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:14.923 11:49:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.923 11:49:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.923 11:49:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.923 11:49:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:14.923 11:49:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.923 11:49:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:14.923 Found net devices under 0000:af:00.0: cvl_0_0 00:13:14.923 11:49:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:13:14.923 11:49:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.923 11:49:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.923 11:49:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:14.923 11:49:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.923 11:49:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:14.923 Found net devices under 0000:af:00.1: cvl_0_1 00:13:14.923 11:49:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.923 11:49:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:14.923 11:49:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:14.923 11:49:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:14.923 11:49:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:14.923 11:49:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.923 11:49:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.923 11:49:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.923 11:49:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.923 11:49:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.923 11:49:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.923 11:49:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.923 11:49:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.923 11:49:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.923 11:49:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.923 11:49:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.923 11:49:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.923 11:49:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.923 11:49:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.923 11:49:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.923 11:49:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.923 11:49:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.923 11:49:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.923 11:49:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.923 11:49:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:13:14.923 00:13:14.923 --- 10.0.0.2 ping statistics --- 00:13:14.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.923 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:13:14.923 11:49:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:14.923 00:13:14.923 --- 10.0.0.1 ping statistics --- 00:13:14.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.923 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:14.923 11:49:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.923 11:49:05 -- nvmf/common.sh@411 -- # return 0 00:13:14.923 11:49:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:14.923 11:49:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.923 11:49:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:14.923 11:49:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:14.923 11:49:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.923 11:49:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:14.923 11:49:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:14.923 11:49:05 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:14.923 11:49:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:14.923 11:49:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:14.923 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:14.923 11:49:05 -- nvmf/common.sh@470 -- # nvmfpid=2405873 00:13:14.923 11:49:05 -- nvmf/common.sh@471 -- # waitforlisten 2405873 00:13:14.923 11:49:05 -- common/autotest_common.sh@817 -- # '[' -z 2405873 ']' 00:13:14.923 11:49:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.923 11:49:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:14.923 11:49:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.923 11:49:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:14.923 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:14.923 11:49:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:14.923 [2024-04-18 11:49:05.128181] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:14.923 [2024-04-18 11:49:05.128267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.923 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.923 [2024-04-18 11:49:05.255905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.924 [2024-04-18 11:49:05.462913] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.924 [2024-04-18 11:49:05.462956] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.924 [2024-04-18 11:49:05.462969] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.924 [2024-04-18 11:49:05.462982] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.924 [2024-04-18 11:49:05.462992] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:14.924 [2024-04-18 11:49:05.463028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.492 11:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:15.492 11:49:05 -- common/autotest_common.sh@850 -- # return 0 00:13:15.492 11:49:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:15.492 11:49:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:15.492 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.492 11:49:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.492 11:49:05 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.492 11:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.493 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.493 [2024-04-18 11:49:05.922198] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.493 11:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.493 11:49:05 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:15.493 11:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.493 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.493 11:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.493 11:49:05 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.493 11:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.493 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.493 [2024-04-18 11:49:05.938394] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.493 11:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.493 11:49:05 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:15.493 11:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.493 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.493 NULL1 00:13:15.493 11:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.493 11:49:05 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:15.493 11:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.493 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.493 11:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.493 11:49:05 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:15.493 11:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.493 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:13:15.493 11:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.493 11:49:05 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:15.493 [2024-04-18 11:49:06.000204] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
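The fused_ordering bring-up repeats the transport/subsystem/listener/NULL1 steps used for connect_stress, with one addition: the null bdev is actually attached to the subsystem as a namespace before the I/O tool starts, which is why the tool later reports "Namespace ID: 1 size: 1GB". A sketch of the extra traced steps (paths abbreviated; each fused_ordering(N) line that follows appears to mark one completed iteration of the tool's fused-command loop):

    # after transport, subsystem, listener and NULL1 are configured as before:
    rpc_cmd bdev_wait_for_examine                                      # let bdev examine callbacks finish
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # export NULL1 as namespace 1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'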
00:13:15.493 [2024-04-18 11:49:06.000265] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406144 ] 00:13:15.752 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.321 Attached to nqn.2016-06.io.spdk:cnode1 00:13:16.321 Namespace ID: 1 size: 1GB
00:13:16.321 [fused_ordering(0) … fused_ordering(1023): 1024 sequential fused_ordering completions logged in order between 00:13:16.321 and 00:13:18.679]
00:13:18.679 11:49:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:18.679 11:49:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:18.679 11:49:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:18.679 11:49:08 -- nvmf/common.sh@117 -- # sync 00:13:18.679 11:49:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.679 11:49:08 -- nvmf/common.sh@120 -- # set +e 00:13:18.679 11:49:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.679 11:49:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.679 rmmod nvme_tcp 00:13:18.679 rmmod nvme_fabrics 00:13:18.679 rmmod nvme_keyring 00:13:18.679 11:49:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.679 11:49:09 -- nvmf/common.sh@124 -- # set -e 00:13:18.679 11:49:09 -- nvmf/common.sh@125 -- # return 0 00:13:18.679 11:49:09 -- nvmf/common.sh@478 -- # '[' -n 2405873 ']' 00:13:18.679 11:49:09 -- nvmf/common.sh@479 -- # killprocess 2405873 00:13:18.679 11:49:09 -- common/autotest_common.sh@936 -- # '[' -z 2405873 ']' 00:13:18.679 11:49:09 -- common/autotest_common.sh@940 -- # kill -0 2405873 00:13:18.679 11:49:09 -- common/autotest_common.sh@941 -- # uname 00:13:18.679 11:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.679 11:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers
-o comm= 2405873 00:13:18.679 11:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:18.679 11:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:18.679 11:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2405873' 00:13:18.679 killing process with pid 2405873 00:13:18.679 11:49:09 -- common/autotest_common.sh@955 -- # kill 2405873 00:13:18.679 11:49:09 -- common/autotest_common.sh@960 -- # wait 2405873 00:13:20.059 11:49:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:20.059 11:49:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:20.059 11:49:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:20.059 11:49:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.059 11:49:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.059 11:49:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.059 11:49:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.059 11:49:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.967 11:49:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.967 00:13:21.967 real 0m13.833s 00:13:21.967 user 0m8.121s 00:13:21.967 sys 0m7.078s 00:13:21.967 11:49:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.967 11:49:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.967 ************************************ 00:13:21.967 END TEST nvmf_fused_ordering 00:13:21.967 ************************************ 00:13:21.967 11:49:12 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:21.967 11:49:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:21.967 11:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.967 11:49:12 -- common/autotest_common.sh@10 -- # set +x 00:13:22.227 ************************************ 00:13:22.227 START TEST nvmf_delete_subsystem 00:13:22.227 ************************************ 00:13:22.227 11:49:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:22.227 * Looking for test storage... 
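The run_test wrapper above launches the next target test. As a rough, hedged sketch (not taken from this trace), the same test can be exercised on its own by running the script directly from the SPDK checkout; root privileges are assumed because the script builds network namespaces and iptables rules, and the --transport argument mirrors the one passed by run_test:

    # hedged sketch: standalone invocation of the test started above
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/target/delete_subsystem.sh --transport=tcp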
00:13:22.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.227 11:49:12 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.227 11:49:12 -- nvmf/common.sh@7 -- # uname -s 00:13:22.227 11:49:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.227 11:49:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.227 11:49:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.227 11:49:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.227 11:49:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.227 11:49:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.227 11:49:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.227 11:49:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.227 11:49:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.227 11:49:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.227 11:49:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:22.227 11:49:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:22.227 11:49:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.227 11:49:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.227 11:49:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.227 11:49:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.227 11:49:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.227 11:49:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.227 11:49:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.227 11:49:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.227 11:49:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.227 11:49:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.227 11:49:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.227 11:49:12 -- paths/export.sh@5 -- # export PATH 00:13:22.227 11:49:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.227 11:49:12 -- nvmf/common.sh@47 -- # : 0 00:13:22.227 11:49:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.227 11:49:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.227 11:49:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.227 11:49:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.227 11:49:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.227 11:49:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.227 11:49:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.227 11:49:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.227 11:49:12 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:22.227 11:49:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:22.227 11:49:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.227 11:49:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:22.227 11:49:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:22.227 11:49:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:22.227 11:49:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.227 11:49:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.227 11:49:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.227 11:49:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:22.227 11:49:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:22.227 11:49:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.227 11:49:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 11:49:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:28.800 11:49:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.800 11:49:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.800 11:49:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.800 11:49:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.800 11:49:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.800 11:49:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.800 11:49:18 -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.800 11:49:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.800 11:49:18 -- nvmf/common.sh@296 -- # e810=() 00:13:28.800 11:49:18 -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.800 11:49:18 -- nvmf/common.sh@297 -- # x722=() 
00:13:28.800 11:49:18 -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.800 11:49:18 -- nvmf/common.sh@298 -- # mlx=() 00:13:28.800 11:49:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.800 11:49:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.800 11:49:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.800 11:49:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.800 11:49:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.800 11:49:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.800 11:49:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:28.800 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:28.800 11:49:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.800 11:49:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:28.800 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:28.800 11:49:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.800 11:49:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.801 11:49:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.801 11:49:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.801 11:49:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:28.801 11:49:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.801 11:49:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:28.801 Found net devices under 0000:af:00.0: cvl_0_0 00:13:28.801 11:49:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
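The loop above maps each supported PCI function to its kernel net interface through sysfs before deciding which devices to use. A minimal sketch of the same lookup done by hand, using the 0000:af:00.0 address and the cvl_0_0 name reported in the trace (the sysfs paths are standard kernel locations, not anything specific to this harness):

    # hedged sketch: resolve a PCI function to its net interface, mirroring
    # the /sys/bus/pci/devices/$pci/net/* glob used in the trace above
    pci=0000:af:00.0
    ls /sys/bus/pci/devices/$pci/net/        # reported above as: cvl_0_0
    cat /sys/bus/pci/devices/$pci/device     # reported above as: 0x159b (on the e810 list)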
00:13:28.801 11:49:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.801 11:49:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.801 11:49:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:28.801 11:49:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.801 11:49:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:28.801 Found net devices under 0000:af:00.1: cvl_0_1 00:13:28.801 11:49:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.801 11:49:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:28.801 11:49:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:28.801 11:49:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:28.801 11:49:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:28.801 11:49:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.801 11:49:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.801 11:49:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.801 11:49:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.801 11:49:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.801 11:49:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.801 11:49:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.801 11:49:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.801 11:49:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.801 11:49:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.801 11:49:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.801 11:49:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.801 11:49:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.801 11:49:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.801 11:49:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.801 11:49:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.801 11:49:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.801 11:49:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.801 11:49:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.801 11:49:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:13:28.801 00:13:28.801 --- 10.0.0.2 ping statistics --- 00:13:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.801 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:28.801 11:49:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:13:28.801 00:13:28.801 --- 10.0.0.1 ping statistics --- 00:13:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.801 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:13:28.801 11:49:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.801 11:49:19 -- nvmf/common.sh@411 -- # return 0 00:13:28.801 11:49:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:28.801 11:49:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.801 11:49:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:28.801 11:49:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:28.801 11:49:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.801 11:49:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:28.801 11:49:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:28.801 11:49:19 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:28.801 11:49:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:28.801 11:49:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:28.801 11:49:19 -- common/autotest_common.sh@10 -- # set +x 00:13:28.801 11:49:19 -- nvmf/common.sh@470 -- # nvmfpid=2410389 00:13:28.801 11:49:19 -- nvmf/common.sh@471 -- # waitforlisten 2410389 00:13:28.801 11:49:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:28.801 11:49:19 -- common/autotest_common.sh@817 -- # '[' -z 2410389 ']' 00:13:28.801 11:49:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.801 11:49:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:28.801 11:49:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.801 11:49:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:28.801 11:49:19 -- common/autotest_common.sh@10 -- # set +x 00:13:28.801 [2024-04-18 11:49:19.331795] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:28.801 [2024-04-18 11:49:19.331884] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.060 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.060 [2024-04-18 11:49:19.462963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.319 [2024-04-18 11:49:19.682465] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.319 [2024-04-18 11:49:19.682513] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.319 [2024-04-18 11:49:19.682525] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.319 [2024-04-18 11:49:19.682538] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.319 [2024-04-18 11:49:19.682551] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
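With the target process up and listening on /var/tmp/spdk.sock (per the "Waiting for process to start up..." line above), the rpc_cmd calls in the trace that follows configure it and then delete the subsystem while I/O is still in flight. rpc_cmd in this harness forwards its arguments to SPDK's scripts/rpc.py, so, as a hedged sketch, the same sequence run by hand from the SPDK checkout would look roughly like this (all parameters are copied from the trace below; the relative rpc.py and spdk_nvme_perf paths and the default RPC socket are assumptions about a standard SPDK build):

    # hedged sketch of the configuration and workload issued via rpc_cmd below
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # run the load generator against the listener, then remove the subsystem
    # underneath it, which is what produces the aborted completions later on
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1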
00:13:29.319 [2024-04-18 11:49:19.682618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.319 [2024-04-18 11:49:19.682628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.578 11:49:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:29.578 11:49:20 -- common/autotest_common.sh@850 -- # return 0 00:13:29.578 11:49:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:29.578 11:49:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:29.578 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 11:49:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.837 11:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.837 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 [2024-04-18 11:49:20.151548] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.837 11:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:29.837 11:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.837 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 11:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.837 11:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.837 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 [2024-04-18 11:49:20.167947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.837 11:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:29.837 11:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.837 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 NULL1 00:13:29.837 11:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:29.837 11:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.837 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 Delay0 00:13:29.837 11:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.837 11:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.837 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.837 11:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@28 -- # perf_pid=2410669 00:13:29.837 11:49:20 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:29.837 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.837 [2024-04-18 11:49:20.273496] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:31.742 11:49:22 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.742 11:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.742 11:49:22 -- common/autotest_common.sh@10 -- # set +x 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 starting I/O failed: -6 00:13:32.002 Write completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 starting I/O failed: -6 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 starting I/O failed: -6 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Write completed with error (sct=0, sc=8) 00:13:32.002 Write completed with error (sct=0, sc=8) 00:13:32.002 starting I/O failed: -6 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.002 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed 
with error (sct=0, sc=8) 00:13:32.003 [2024-04-18 11:49:22.544334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002840 is same with the state(5) to be set 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 starting I/O failed: -6 00:13:32.003 [2024-04-18 11:49:22.545198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error 
(sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 [2024-04-18 11:49:22.545975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 
00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.003 Write completed with error (sct=0, sc=8) 00:13:32.003 Read completed with error (sct=0, sc=8) 00:13:32.004 Read completed with error (sct=0, sc=8) 00:13:32.004 Read completed with error (sct=0, sc=8) 00:13:32.004 Read completed with error (sct=0, sc=8) 00:13:32.004 Write completed with error (sct=0, sc=8) 00:13:32.004 Read completed with error (sct=0, sc=8) 00:13:32.004 [2024-04-18 11:49:22.546692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set 00:13:33.382 [2024-04-18 11:49:23.497652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 [2024-04-18 11:49:23.547214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010440 is same with the state(5) to be set 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, 
sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 [2024-04-18 11:49:23.548706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 [2024-04-18 11:49:23.549426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 
00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Write completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.382 Read completed with error (sct=0, sc=8) 00:13:33.383 [2024-04-18 11:49:23.550190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:13:33.383 [2024-04-18 11:49:23.555582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:13:33.383 11:49:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:33.383 11:49:23 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:33.383 11:49:23 -- target/delete_subsystem.sh@35 -- # kill -0 2410669 00:13:33.383 11:49:23 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:33.383 Initializing NVMe Controllers 00:13:33.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.383 Controller IO queue size 128, less than required. 00:13:33.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:33.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:33.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:33.383 Initialization complete. Launching workers. 
00:13:33.383 ======================================================== 00:13:33.383 Latency(us) 00:13:33.383 Device Information : IOPS MiB/s Average min max 00:13:33.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.11 0.09 959004.69 1249.05 1014899.32 00:13:33.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.78 0.08 868337.12 797.23 1012917.20 00:13:33.383 ======================================================== 00:13:33.383 Total : 336.89 0.16 916541.82 797.23 1014899.32 00:13:33.383 00:13:33.642 11:49:24 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:33.642 11:49:24 -- target/delete_subsystem.sh@35 -- # kill -0 2410669 00:13:33.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2410669) - No such process 00:13:33.642 11:49:24 -- target/delete_subsystem.sh@45 -- # NOT wait 2410669 00:13:33.642 11:49:24 -- common/autotest_common.sh@638 -- # local es=0 00:13:33.642 11:49:24 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2410669 00:13:33.642 11:49:24 -- common/autotest_common.sh@626 -- # local arg=wait 00:13:33.642 11:49:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.642 11:49:24 -- common/autotest_common.sh@630 -- # type -t wait 00:13:33.642 11:49:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.642 11:49:24 -- common/autotest_common.sh@641 -- # wait 2410669 00:13:33.642 11:49:24 -- common/autotest_common.sh@641 -- # es=1 00:13:33.642 11:49:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:33.642 11:49:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:33.642 11:49:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:33.642 11:49:24 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.642 11:49:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.642 11:49:24 -- common/autotest_common.sh@10 -- # set +x 00:13:33.642 11:49:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.642 11:49:24 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.642 11:49:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.642 11:49:24 -- common/autotest_common.sh@10 -- # set +x 00:13:33.642 [2024-04-18 11:49:24.080477] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.642 11:49:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.642 11:49:24 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.642 11:49:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.642 11:49:24 -- common/autotest_common.sh@10 -- # set +x 00:13:33.642 11:49:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.643 11:49:24 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:33.643 11:49:24 -- target/delete_subsystem.sh@54 -- # perf_pid=2411238 00:13:33.643 11:49:24 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:33.643 11:49:24 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:33.643 11:49:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:33.643 EAL: No free 2048 kB hugepages 
reported on node 1 00:13:33.643 [2024-04-18 11:49:24.165728] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:34.211 11:49:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:34.211 11:49:24 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:34.211 11:49:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:34.780 11:49:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:34.780 11:49:25 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:34.780 11:49:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.348 11:49:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.348 11:49:25 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:35.348 11:49:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.606 11:49:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.606 11:49:26 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:35.606 11:49:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.173 11:49:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.173 11:49:26 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:36.173 11:49:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.777 11:49:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.777 11:49:27 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:36.777 11:49:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.057 Initializing NVMe Controllers 00:13:37.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.057 Controller IO queue size 128, less than required. 00:13:37.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:37.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:37.057 Initialization complete. Launching workers. 
00:13:37.057 ======================================================== 00:13:37.057 Latency(us) 00:13:37.057 Device Information : IOPS MiB/s Average min max 00:13:37.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005131.48 1000202.33 1043554.28 00:13:37.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005518.28 1000417.18 1013593.10 00:13:37.057 ======================================================== 00:13:37.057 Total : 256.00 0.12 1005324.88 1000202.33 1043554.28 00:13:37.057 00:13:37.316 11:49:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.316 11:49:27 -- target/delete_subsystem.sh@57 -- # kill -0 2411238 00:13:37.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2411238) - No such process 00:13:37.316 11:49:27 -- target/delete_subsystem.sh@67 -- # wait 2411238 00:13:37.316 11:49:27 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:37.316 11:49:27 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:37.316 11:49:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:37.316 11:49:27 -- nvmf/common.sh@117 -- # sync 00:13:37.316 11:49:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.316 11:49:27 -- nvmf/common.sh@120 -- # set +e 00:13:37.316 11:49:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.316 11:49:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.316 rmmod nvme_tcp 00:13:37.316 rmmod nvme_fabrics 00:13:37.316 rmmod nvme_keyring 00:13:37.316 11:49:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.316 11:49:27 -- nvmf/common.sh@124 -- # set -e 00:13:37.316 11:49:27 -- nvmf/common.sh@125 -- # return 0 00:13:37.316 11:49:27 -- nvmf/common.sh@478 -- # '[' -n 2410389 ']' 00:13:37.316 11:49:27 -- nvmf/common.sh@479 -- # killprocess 2410389 00:13:37.316 11:49:27 -- common/autotest_common.sh@936 -- # '[' -z 2410389 ']' 00:13:37.316 11:49:27 -- common/autotest_common.sh@940 -- # kill -0 2410389 00:13:37.316 11:49:27 -- common/autotest_common.sh@941 -- # uname 00:13:37.316 11:49:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.316 11:49:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2410389 00:13:37.316 11:49:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:37.316 11:49:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:37.316 11:49:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2410389' 00:13:37.316 killing process with pid 2410389 00:13:37.316 11:49:27 -- common/autotest_common.sh@955 -- # kill 2410389 00:13:37.316 11:49:27 -- common/autotest_common.sh@960 -- # wait 2410389 00:13:38.694 11:49:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:38.694 11:49:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:38.694 11:49:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:38.694 11:49:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.694 11:49:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.694 11:49:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.694 11:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.695 11:49:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.602 11:49:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.602 00:13:40.602 real 0m18.488s 00:13:40.602 user 0m31.472s 00:13:40.602 sys 0m6.795s 
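(For reference, the delete-under-load sequence exercised by delete_subsystem.sh above reduces to a handful of RPC/CLI calls. The following is a minimal sketch reconstructed from the commands traced in this log; the rpc.py and spdk_nvme_perf paths, the 10.0.0.2:4420 listener, and the Delay0 bdev are the ones used in this run and are assumed to already be set up the same way.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# expose a namespace over NVMe/TCP (same arguments as in the trace above)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# start I/O against the subsystem in the background
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# delete the subsystem while that I/O is still in flight; the queued requests
# complete with error (sct=0, sc=8), new submissions fail with -6, and the TCP
# qpairs are torn down, which is what the completions logged above show
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# the script then polls (with an upper bound in the real test) until perf is gone
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done
wait "$perf_pid" || true   # perf exits with an error after its I/O was aborted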
00:13:40.602 11:49:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:40.602 11:49:31 -- common/autotest_common.sh@10 -- # set +x 00:13:40.602 ************************************ 00:13:40.602 END TEST nvmf_delete_subsystem 00:13:40.602 ************************************ 00:13:40.602 11:49:31 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:40.602 11:49:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:40.602 11:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.602 11:49:31 -- common/autotest_common.sh@10 -- # set +x 00:13:40.860 ************************************ 00:13:40.860 START TEST nvmf_ns_masking 00:13:40.860 ************************************ 00:13:40.860 11:49:31 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:40.860 * Looking for test storage... 00:13:41.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.119 11:49:31 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.119 11:49:31 -- nvmf/common.sh@7 -- # uname -s 00:13:41.119 11:49:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.119 11:49:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.119 11:49:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.119 11:49:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.119 11:49:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.119 11:49:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.119 11:49:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.119 11:49:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.119 11:49:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.119 11:49:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.119 11:49:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:41.119 11:49:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:41.119 11:49:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.119 11:49:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.119 11:49:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.119 11:49:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.119 11:49:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.119 11:49:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.119 11:49:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.119 11:49:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.119 11:49:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.119 11:49:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.119 11:49:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.119 11:49:31 -- paths/export.sh@5 -- # export PATH 00:13:41.119 11:49:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.119 11:49:31 -- nvmf/common.sh@47 -- # : 0 00:13:41.119 11:49:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.119 11:49:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.119 11:49:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.119 11:49:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.119 11:49:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.119 11:49:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.119 11:49:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.119 11:49:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.119 11:49:31 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.119 11:49:31 -- target/ns_masking.sh@11 -- # loops=5 00:13:41.119 11:49:31 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:41.120 11:49:31 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:41.120 11:49:31 -- target/ns_masking.sh@15 -- # uuidgen 00:13:41.120 11:49:31 -- target/ns_masking.sh@15 -- # HOSTID=99f07099-ff77-470b-992b-d58bdf7722eb 00:13:41.120 11:49:31 -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:41.120 11:49:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:41.120 11:49:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.120 11:49:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:41.120 11:49:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:41.120 11:49:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:41.120 11:49:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.120 11:49:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.120 11:49:31 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:13:41.120 11:49:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:41.120 11:49:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:41.120 11:49:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.120 11:49:31 -- common/autotest_common.sh@10 -- # set +x 00:13:47.689 11:49:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.689 11:49:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.689 11:49:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.689 11:49:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.689 11:49:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.689 11:49:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.689 11:49:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.689 11:49:37 -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.689 11:49:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.689 11:49:37 -- nvmf/common.sh@296 -- # e810=() 00:13:47.689 11:49:37 -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.689 11:49:37 -- nvmf/common.sh@297 -- # x722=() 00:13:47.689 11:49:37 -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.689 11:49:37 -- nvmf/common.sh@298 -- # mlx=() 00:13:47.689 11:49:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.689 11:49:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.689 11:49:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.689 11:49:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.689 11:49:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.689 11:49:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.689 11:49:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:47.689 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:47.689 11:49:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.689 11:49:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:47.689 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:47.689 11:49:37 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.689 11:49:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.689 11:49:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.689 11:49:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.689 11:49:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.689 11:49:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:47.689 Found net devices under 0000:af:00.0: cvl_0_0 00:13:47.689 11:49:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.689 11:49:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.689 11:49:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.689 11:49:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.689 11:49:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.689 11:49:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:47.689 Found net devices under 0000:af:00.1: cvl_0_1 00:13:47.689 11:49:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.689 11:49:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:47.689 11:49:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:47.689 11:49:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:47.689 11:49:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:47.689 11:49:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.689 11:49:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.689 11:49:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.689 11:49:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.689 11:49:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.689 11:49:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.689 11:49:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.689 11:49:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.689 11:49:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.689 11:49:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.690 11:49:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.690 11:49:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.690 11:49:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.690 11:49:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.690 11:49:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.690 11:49:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.690 11:49:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.690 11:49:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.690 11:49:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.690 11:49:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:13:47.690 00:13:47.690 --- 10.0.0.2 ping statistics --- 00:13:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.690 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:13:47.690 11:49:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:13:47.690 00:13:47.690 --- 10.0.0.1 ping statistics --- 00:13:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.690 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:13:47.690 11:49:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.690 11:49:37 -- nvmf/common.sh@411 -- # return 0 00:13:47.690 11:49:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:47.690 11:49:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.690 11:49:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:47.690 11:49:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:47.690 11:49:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.690 11:49:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:47.690 11:49:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:47.690 11:49:37 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:47.690 11:49:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:47.690 11:49:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:47.690 11:49:37 -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 11:49:37 -- nvmf/common.sh@470 -- # nvmfpid=2415720 00:13:47.690 11:49:37 -- nvmf/common.sh@471 -- # waitforlisten 2415720 00:13:47.690 11:49:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.690 11:49:37 -- common/autotest_common.sh@817 -- # '[' -z 2415720 ']' 00:13:47.690 11:49:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.690 11:49:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:47.690 11:49:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.690 11:49:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:47.690 11:49:37 -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 [2024-04-18 11:49:38.081089] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:47.690 [2024-04-18 11:49:38.081180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.690 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.690 [2024-04-18 11:49:38.210511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.950 [2024-04-18 11:49:38.425262] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:47.950 [2024-04-18 11:49:38.425308] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.950 [2024-04-18 11:49:38.425320] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.950 [2024-04-18 11:49:38.425350] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.950 [2024-04-18 11:49:38.425359] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.950 [2024-04-18 11:49:38.425443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.950 [2024-04-18 11:49:38.425556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.950 [2024-04-18 11:49:38.425575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.950 [2024-04-18 11:49:38.425584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.518 11:49:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.518 11:49:38 -- common/autotest_common.sh@850 -- # return 0 00:13:48.518 11:49:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:48.518 11:49:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:48.518 11:49:38 -- common/autotest_common.sh@10 -- # set +x 00:13:48.518 11:49:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.518 11:49:38 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:48.518 [2024-04-18 11:49:39.044245] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.778 11:49:39 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:48.778 11:49:39 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:48.778 11:49:39 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:48.778 Malloc1 00:13:49.037 11:49:39 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:49.037 Malloc2 00:13:49.296 11:49:39 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.296 11:49:39 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:49.554 11:49:39 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.813 [2024-04-18 11:49:40.118648] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.813 11:49:40 -- target/ns_masking.sh@61 -- # connect 00:13:49.813 11:49:40 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 99f07099-ff77-470b-992b-d58bdf7722eb -a 10.0.0.2 -s 4420 -i 4 00:13:49.813 11:49:40 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.813 11:49:40 -- common/autotest_common.sh@1184 -- # local i=0 00:13:49.813 11:49:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.813 11:49:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:13:49.813 11:49:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:52.349 11:49:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:52.349 11:49:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:52.349 11:49:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.349 11:49:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:52.349 11:49:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.349 11:49:42 -- common/autotest_common.sh@1194 -- # return 0 00:13:52.349 11:49:42 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:52.349 11:49:42 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:52.349 11:49:42 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:52.349 11:49:42 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:52.349 11:49:42 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:52.349 11:49:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:52.349 11:49:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:52.349 [ 0]:0x1 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # nguid=e21e490b90ba42bf92e55aec7c6942d3 00:13:52.349 11:49:42 -- target/ns_masking.sh@41 -- # [[ e21e490b90ba42bf92e55aec7c6942d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.349 11:49:42 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:52.349 11:49:42 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:52.349 11:49:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:52.349 11:49:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:52.349 [ 0]:0x1 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # nguid=e21e490b90ba42bf92e55aec7c6942d3 00:13:52.349 11:49:42 -- target/ns_masking.sh@41 -- # [[ e21e490b90ba42bf92e55aec7c6942d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.349 11:49:42 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:52.349 11:49:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:52.349 11:49:42 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:52.349 [ 1]:0x2 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:52.349 11:49:42 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:52.349 11:49:42 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.349 11:49:42 -- target/ns_masking.sh@69 -- # disconnect 00:13:52.349 11:49:42 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.349 11:49:42 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.608 11:49:43 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:52.866 11:49:43 -- target/ns_masking.sh@77 -- # connect 1 00:13:52.866 11:49:43 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 99f07099-ff77-470b-992b-d58bdf7722eb -a 10.0.0.2 -s 4420 -i 4 00:13:52.866 11:49:43 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:52.866 11:49:43 -- common/autotest_common.sh@1184 -- # local i=0 00:13:52.866 11:49:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.866 11:49:43 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:13:52.866 11:49:43 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:13:52.866 11:49:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:55.397 11:49:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:55.397 11:49:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:55.397 11:49:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.397 11:49:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:55.397 11:49:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.397 11:49:45 -- common/autotest_common.sh@1194 -- # return 0 00:13:55.397 11:49:45 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:55.398 11:49:45 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:55.398 11:49:45 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:55.398 11:49:45 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:55.398 11:49:45 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:55.398 11:49:45 -- common/autotest_common.sh@638 -- # local es=0 00:13:55.398 11:49:45 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:55.398 11:49:45 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:55.398 11:49:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.398 11:49:45 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:55.398 11:49:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.398 11:49:45 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:55.398 11:49:45 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.398 11:49:45 -- common/autotest_common.sh@641 -- # es=1 00:13:55.398 11:49:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:55.398 11:49:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:55.398 11:49:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:55.398 11:49:45 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:55.398 [ 0]:0x2 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:55.398 11:49:45 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.398 11:49:45 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:55.398 11:49:45 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.398 [ 0]:0x1 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nguid=e21e490b90ba42bf92e55aec7c6942d3 00:13:55.398 11:49:45 -- target/ns_masking.sh@41 -- # [[ e21e490b90ba42bf92e55aec7c6942d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.398 11:49:45 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.398 11:49:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:55.398 [ 1]:0x2 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.398 11:49:45 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:55.398 11:49:45 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.398 11:49:45 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:55.657 11:49:46 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:55.657 11:49:46 -- common/autotest_common.sh@638 -- # local es=0 00:13:55.657 11:49:46 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:55.657 11:49:46 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:55.657 11:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.657 11:49:46 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:55.657 11:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.657 11:49:46 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:55.657 11:49:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.657 11:49:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:55.657 11:49:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.657 11:49:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.657 11:49:46 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:55.657 11:49:46 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.657 11:49:46 -- common/autotest_common.sh@641 -- # es=1 00:13:55.657 11:49:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:55.657 11:49:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:55.657 11:49:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:55.657 11:49:46 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:55.657 11:49:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.657 11:49:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:55.657 [ 0]:0x2 00:13:55.657 11:49:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.657 11:49:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.657 11:49:46 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:55.657 11:49:46 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.657 11:49:46 -- target/ns_masking.sh@91 -- # disconnect 00:13:55.657 11:49:46 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.916 11:49:46 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:55.916 11:49:46 -- target/ns_masking.sh@95 -- # connect 2 00:13:55.916 11:49:46 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 99f07099-ff77-470b-992b-d58bdf7722eb -a 10.0.0.2 -s 4420 -i 4 00:13:56.175 11:49:46 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:56.175 11:49:46 -- common/autotest_common.sh@1184 -- # local i=0 00:13:56.175 11:49:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.175 11:49:46 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:56.175 11:49:46 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:56.175 11:49:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:58.153 11:49:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:58.153 11:49:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:58.153 11:49:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.153 11:49:48 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:13:58.153 11:49:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.153 11:49:48 -- common/autotest_common.sh@1194 -- # return 0 00:13:58.153 11:49:48 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:58.153 11:49:48 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:58.412 11:49:48 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:58.412 11:49:48 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:58.412 11:49:48 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:58.412 11:49:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.412 11:49:48 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.412 [ 0]:0x1 00:13:58.412 11:49:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.412 11:49:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.412 11:49:48 -- target/ns_masking.sh@40 -- # nguid=e21e490b90ba42bf92e55aec7c6942d3 00:13:58.412 11:49:48 -- target/ns_masking.sh@41 -- # [[ e21e490b90ba42bf92e55aec7c6942d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.412 11:49:48 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:58.412 11:49:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.412 11:49:48 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.412 [ 1]:0x2 
00:13:58.412 11:49:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.412 11:49:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.412 11:49:48 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:58.412 11:49:48 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.412 11:49:48 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.671 11:49:49 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:58.671 11:49:49 -- common/autotest_common.sh@638 -- # local es=0 00:13:58.671 11:49:49 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.671 11:49:49 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:58.671 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.671 11:49:49 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:58.671 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.671 11:49:49 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:58.671 11:49:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.671 11:49:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.671 11:49:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.671 11:49:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.671 11:49:49 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:58.671 11:49:49 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.671 11:49:49 -- common/autotest_common.sh@641 -- # es=1 00:13:58.671 11:49:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:58.671 11:49:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:58.671 11:49:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:58.671 11:49:49 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:58.671 11:49:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.671 11:49:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.671 [ 0]:0x2 00:13:58.671 11:49:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.671 11:49:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.671 11:49:49 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:58.671 11:49:49 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.671 11:49:49 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:58.671 11:49:49 -- common/autotest_common.sh@638 -- # local es=0 00:13:58.671 11:49:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:58.671 11:49:49 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.671 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.671 11:49:49 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.671 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.672 11:49:49 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.672 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.672 11:49:49 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.672 11:49:49 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:58.672 11:49:49 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:58.931 [2024-04-18 11:49:49.360016] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:58.931 request: 00:13:58.931 { 00:13:58.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.931 "nsid": 2, 00:13:58.931 "host": "nqn.2016-06.io.spdk:host1", 00:13:58.931 "method": "nvmf_ns_remove_host", 00:13:58.931 "req_id": 1 00:13:58.931 } 00:13:58.931 Got JSON-RPC error response 00:13:58.931 response: 00:13:58.931 { 00:13:58.931 "code": -32602, 00:13:58.931 "message": "Invalid parameters" 00:13:58.931 } 00:13:58.931 11:49:49 -- common/autotest_common.sh@641 -- # es=1 00:13:58.931 11:49:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:58.931 11:49:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:58.931 11:49:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:58.931 11:49:49 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:58.931 11:49:49 -- common/autotest_common.sh@638 -- # local es=0 00:13:58.931 11:49:49 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.931 11:49:49 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:58.931 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.931 11:49:49 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:58.931 11:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:58.931 11:49:49 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:58.931 11:49:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.931 11:49:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.931 11:49:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.931 11:49:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.931 11:49:49 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:58.931 11:49:49 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.931 11:49:49 -- common/autotest_common.sh@641 -- # es=1 00:13:58.931 11:49:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:58.931 11:49:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:58.931 11:49:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:58.931 11:49:49 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:58.931 11:49:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.931 11:49:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.931 [ 0]:0x2 00:13:58.931 11:49:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.931 11:49:49 -- target/ns_masking.sh@40 -- 
# nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.190 11:49:49 -- target/ns_masking.sh@40 -- # nguid=269916de2994470bac82ddc966292924 00:13:59.190 11:49:49 -- target/ns_masking.sh@41 -- # [[ 269916de2994470bac82ddc966292924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.190 11:49:49 -- target/ns_masking.sh@108 -- # disconnect 00:13:59.190 11:49:49 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.190 11:49:49 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.190 11:49:49 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:59.190 11:49:49 -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:59.190 11:49:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:59.190 11:49:49 -- nvmf/common.sh@117 -- # sync 00:13:59.190 11:49:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.190 11:49:49 -- nvmf/common.sh@120 -- # set +e 00:13:59.190 11:49:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.190 11:49:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.190 rmmod nvme_tcp 00:13:59.449 rmmod nvme_fabrics 00:13:59.449 rmmod nvme_keyring 00:13:59.449 11:49:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.449 11:49:49 -- nvmf/common.sh@124 -- # set -e 00:13:59.449 11:49:49 -- nvmf/common.sh@125 -- # return 0 00:13:59.449 11:49:49 -- nvmf/common.sh@478 -- # '[' -n 2415720 ']' 00:13:59.449 11:49:49 -- nvmf/common.sh@479 -- # killprocess 2415720 00:13:59.449 11:49:49 -- common/autotest_common.sh@936 -- # '[' -z 2415720 ']' 00:13:59.449 11:49:49 -- common/autotest_common.sh@940 -- # kill -0 2415720 00:13:59.449 11:49:49 -- common/autotest_common.sh@941 -- # uname 00:13:59.449 11:49:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.449 11:49:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2415720 00:13:59.449 11:49:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:59.449 11:49:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:59.449 11:49:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2415720' 00:13:59.449 killing process with pid 2415720 00:13:59.449 11:49:49 -- common/autotest_common.sh@955 -- # kill 2415720 00:13:59.449 11:49:49 -- common/autotest_common.sh@960 -- # wait 2415720 00:14:01.356 11:49:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:01.356 11:49:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:01.356 11:49:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:01.356 11:49:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.356 11:49:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.356 11:49:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.356 11:49:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.356 11:49:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.262 11:49:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:03.262 00:14:03.262 real 0m22.264s 00:14:03.262 user 0m54.431s 00:14:03.262 sys 0m7.519s 00:14:03.262 11:49:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:03.262 11:49:53 -- common/autotest_common.sh@10 -- # set +x 00:14:03.262 ************************************ 00:14:03.262 END TEST nvmf_ns_masking 00:14:03.262 
************************************ 00:14:03.262 11:49:53 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:03.262 11:49:53 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:03.262 11:49:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:03.262 11:49:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.262 11:49:53 -- common/autotest_common.sh@10 -- # set +x 00:14:03.262 ************************************ 00:14:03.262 START TEST nvmf_nvme_cli 00:14:03.262 ************************************ 00:14:03.262 11:49:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:03.521 * Looking for test storage... 00:14:03.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.521 11:49:53 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.521 11:49:53 -- nvmf/common.sh@7 -- # uname -s 00:14:03.521 11:49:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.521 11:49:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.521 11:49:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.521 11:49:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.521 11:49:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.521 11:49:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.521 11:49:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.521 11:49:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.521 11:49:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.521 11:49:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.521 11:49:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:03.521 11:49:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:03.521 11:49:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.521 11:49:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.521 11:49:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.521 11:49:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.521 11:49:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.521 11:49:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.521 11:49:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.521 11:49:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.521 11:49:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.521 11:49:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.521 11:49:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.521 11:49:53 -- paths/export.sh@5 -- # export PATH 00:14:03.521 11:49:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.521 11:49:53 -- nvmf/common.sh@47 -- # : 0 00:14:03.521 11:49:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.521 11:49:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.521 11:49:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.521 11:49:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.521 11:49:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.521 11:49:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.521 11:49:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.521 11:49:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.521 11:49:53 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.521 11:49:53 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.521 11:49:53 -- target/nvme_cli.sh@14 -- # devs=() 00:14:03.521 11:49:53 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:03.521 11:49:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:03.521 11:49:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.521 11:49:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:03.521 11:49:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:03.521 11:49:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:03.521 11:49:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.521 11:49:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.521 11:49:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.521 11:49:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:03.521 11:49:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:03.521 11:49:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.521 11:49:53 -- common/autotest_common.sh@10 -- # set +x 00:14:10.089 11:50:00 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:10.089 11:50:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.089 11:50:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.089 11:50:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.090 11:50:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.090 11:50:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.090 11:50:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.090 11:50:00 -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.090 11:50:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.090 11:50:00 -- nvmf/common.sh@296 -- # e810=() 00:14:10.090 11:50:00 -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.090 11:50:00 -- nvmf/common.sh@297 -- # x722=() 00:14:10.090 11:50:00 -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.090 11:50:00 -- nvmf/common.sh@298 -- # mlx=() 00:14:10.090 11:50:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.090 11:50:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.090 11:50:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.090 11:50:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.090 11:50:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.090 11:50:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.090 11:50:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:10.090 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:10.090 11:50:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.090 11:50:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:10.090 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:10.090 11:50:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
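The scan above selects the test NICs by PCI vendor:device ID: both ports found (0000:af:00.0 and 0000:af:00.1, 0x8086 - 0x159b) fall in the e810 list built a few lines earlier and are bound to the ice driver. For anyone reproducing the environment by hand, a rough stand-alone equivalent of that lookup is sketched below; it is illustrative only and not the common.sh implementation.

# List E810 ports (vendor 0x8086, device 0x159b) and the net devices behind them.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    printf 'Found %s: %s\n' "${pci}" "$(ls /sys/bus/pci/devices/${pci}/net/ 2>/dev/null)"
done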
00:14:10.090 11:50:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.090 11:50:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.090 11:50:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.090 11:50:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:10.090 11:50:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.090 11:50:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:10.090 Found net devices under 0000:af:00.0: cvl_0_0 00:14:10.090 11:50:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.090 11:50:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.090 11:50:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.090 11:50:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:10.090 11:50:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.090 11:50:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:10.090 Found net devices under 0000:af:00.1: cvl_0_1 00:14:10.090 11:50:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.090 11:50:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:10.090 11:50:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:10.090 11:50:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:10.090 11:50:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:10.090 11:50:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.090 11:50:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.090 11:50:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.090 11:50:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.090 11:50:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.090 11:50:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.090 11:50:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.090 11:50:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.090 11:50:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.090 11:50:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.090 11:50:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.090 11:50:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.090 11:50:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.349 11:50:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.349 11:50:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.349 11:50:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.349 11:50:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.349 11:50:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.349 11:50:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.349 11:50:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:14:10.349 00:14:10.349 --- 10.0.0.2 ping statistics --- 00:14:10.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.349 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:14:10.349 11:50:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:14:10.349 00:14:10.349 --- 10.0.0.1 ping statistics --- 00:14:10.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.349 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:14:10.349 11:50:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.349 11:50:00 -- nvmf/common.sh@411 -- # return 0 00:14:10.349 11:50:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:10.349 11:50:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.349 11:50:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:10.349 11:50:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:10.349 11:50:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.349 11:50:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:10.349 11:50:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:10.349 11:50:00 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:10.349 11:50:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:10.349 11:50:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:10.349 11:50:00 -- common/autotest_common.sh@10 -- # set +x 00:14:10.349 11:50:00 -- nvmf/common.sh@470 -- # nvmfpid=2421972 00:14:10.349 11:50:00 -- nvmf/common.sh@471 -- # waitforlisten 2421972 00:14:10.349 11:50:00 -- common/autotest_common.sh@817 -- # '[' -z 2421972 ']' 00:14:10.349 11:50:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.349 11:50:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.349 11:50:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.349 11:50:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.349 11:50:00 -- common/autotest_common.sh@10 -- # set +x 00:14:10.349 11:50:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.609 [2024-04-18 11:50:00.927487] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:10.609 [2024-04-18 11:50:00.927596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.609 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.609 [2024-04-18 11:50:01.064864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.869 [2024-04-18 11:50:01.285283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.869 [2024-04-18 11:50:01.285330] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.869 [2024-04-18 11:50:01.285342] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.869 [2024-04-18 11:50:01.285355] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.869 [2024-04-18 11:50:01.285365] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.869 [2024-04-18 11:50:01.285440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.869 [2024-04-18 11:50:01.285465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.869 [2024-04-18 11:50:01.285536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.869 [2024-04-18 11:50:01.285543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.437 11:50:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.437 11:50:01 -- common/autotest_common.sh@850 -- # return 0 00:14:11.437 11:50:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:11.437 11:50:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 11:50:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.437 11:50:01 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 [2024-04-18 11:50:01.753889] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 Malloc0 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 Malloc1 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 [2024-04-18 11:50:01.963987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.437 11:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.437 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.437 11:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.437 11:50:01 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:14:11.696 00:14:11.696 Discovery Log Number of Records 2, Generation counter 2 00:14:11.696 =====Discovery Log Entry 0====== 00:14:11.696 trtype: tcp 00:14:11.696 adrfam: ipv4 00:14:11.696 subtype: current discovery subsystem 00:14:11.696 treq: not required 00:14:11.696 portid: 0 00:14:11.696 trsvcid: 4420 00:14:11.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:11.696 traddr: 10.0.0.2 00:14:11.696 eflags: explicit discovery connections, duplicate discovery information 00:14:11.696 sectype: none 00:14:11.696 =====Discovery Log Entry 1====== 00:14:11.696 trtype: tcp 00:14:11.696 adrfam: ipv4 00:14:11.696 subtype: nvme subsystem 00:14:11.696 treq: not required 00:14:11.696 portid: 0 00:14:11.696 trsvcid: 4420 00:14:11.696 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:11.696 traddr: 10.0.0.2 00:14:11.696 eflags: none 00:14:11.696 sectype: none 00:14:11.696 11:50:02 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:11.696 11:50:02 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:11.696 11:50:02 -- nvmf/common.sh@511 -- # local dev _ 00:14:11.696 11:50:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:11.696 11:50:02 -- nvmf/common.sh@510 -- # nvme list 00:14:11.696 11:50:02 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:14:11.696 11:50:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:11.696 11:50:02 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:14:11.696 11:50:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:11.696 11:50:02 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:11.696 11:50:02 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.073 11:50:03 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:13.073 11:50:03 -- common/autotest_common.sh@1184 -- # local i=0 00:14:13.073 11:50:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.073 11:50:03 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:14:13.073 11:50:03 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:14:13.073 11:50:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:15.607 11:50:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:15.607 11:50:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:15.607 11:50:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.607 11:50:05 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
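waitforserial, whose expansion is interleaved above, is a plain polling loop: sleep, count the block devices whose SERIAL column matches the SPDK serial, and return once the expected count (two namespaces here) is reached. A condensed sketch reconstructed from the trace, not the verbatim helper:

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # Same probe as the trace: count lsblk rows carrying the serial.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "${serial}")
        (( nvme_devices == expected )) && return 0
    done
    echo "timed out waiting for ${expected} device(s) with serial ${serial}" >&2
    return 1
}

# Usage mirroring the trace: two namespaces expected after 'nvme connect'.
waitforserial SPDKISFASTANDAWESOME 2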
00:14:15.607 11:50:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.607 11:50:05 -- common/autotest_common.sh@1194 -- # return 0 00:14:15.607 11:50:05 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:15.607 11:50:05 -- nvmf/common.sh@511 -- # local dev _ 00:14:15.607 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.607 11:50:05 -- nvmf/common.sh@510 -- # nvme list 00:14:15.607 11:50:05 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:14:15.607 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.607 11:50:05 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:14:15.607 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.607 11:50:05 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:15.607 11:50:05 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:14:15.607 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:15.608 11:50:05 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:14:15.608 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:15.608 /dev/nvme0n1 ]] 00:14:15.608 11:50:05 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:15.608 11:50:05 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:15.608 11:50:05 -- nvmf/common.sh@511 -- # local dev _ 00:14:15.608 11:50:05 -- nvmf/common.sh@510 -- # nvme list 00:14:15.608 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:14:15.608 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:14:15.608 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:15.608 11:50:05 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:14:15.608 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:15.608 11:50:05 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:14:15.608 11:50:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:15.608 11:50:05 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:15.608 11:50:05 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.608 11:50:05 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.608 11:50:05 -- common/autotest_common.sh@1205 -- # local i=0 00:14:15.608 11:50:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:15.608 11:50:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.608 11:50:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:15.608 11:50:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.608 11:50:05 -- common/autotest_common.sh@1217 -- # return 0 00:14:15.608 11:50:05 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:15.608 11:50:05 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.608 11:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.608 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:14:15.608 11:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.608 11:50:05 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:15.608 11:50:05 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:15.608 11:50:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:15.608 11:50:05 -- nvmf/common.sh@117 -- # sync 00:14:15.608 11:50:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.608 11:50:05 -- nvmf/common.sh@120 -- # set +e 00:14:15.608 11:50:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.608 11:50:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.608 rmmod nvme_tcp 00:14:15.608 rmmod nvme_fabrics 00:14:15.608 rmmod nvme_keyring 00:14:15.608 11:50:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.608 11:50:05 -- nvmf/common.sh@124 -- # set -e 00:14:15.608 11:50:05 -- nvmf/common.sh@125 -- # return 0 00:14:15.608 11:50:05 -- nvmf/common.sh@478 -- # '[' -n 2421972 ']' 00:14:15.608 11:50:05 -- nvmf/common.sh@479 -- # killprocess 2421972 00:14:15.608 11:50:05 -- common/autotest_common.sh@936 -- # '[' -z 2421972 ']' 00:14:15.608 11:50:05 -- common/autotest_common.sh@940 -- # kill -0 2421972 00:14:15.608 11:50:05 -- common/autotest_common.sh@941 -- # uname 00:14:15.608 11:50:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.608 11:50:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2421972 00:14:15.608 11:50:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:15.608 11:50:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:15.608 11:50:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2421972' 00:14:15.608 killing process with pid 2421972 00:14:15.608 11:50:06 -- common/autotest_common.sh@955 -- # kill 2421972 00:14:15.608 11:50:06 -- common/autotest_common.sh@960 -- # wait 2421972 00:14:17.549 11:50:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:17.549 11:50:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:17.549 11:50:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:17.549 11:50:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.549 11:50:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.549 11:50:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.549 11:50:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.549 11:50:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.460 11:50:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.460 00:14:19.460 real 0m16.044s 00:14:19.460 user 0m25.842s 00:14:19.460 sys 0m6.309s 00:14:19.460 11:50:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:19.460 11:50:09 -- common/autotest_common.sh@10 -- # set +x 00:14:19.460 ************************************ 00:14:19.460 END TEST nvmf_nvme_cli 00:14:19.460 ************************************ 00:14:19.460 11:50:09 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:19.460 11:50:09 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:19.460 11:50:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.460 11:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.460 11:50:09 -- common/autotest_common.sh@10 -- # set +x 00:14:19.460 ************************************ 00:14:19.460 START TEST nvmf_vfio_user 00:14:19.460 ************************************ 00:14:19.460 11:50:09 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:19.720 * Looking for test storage... 00:14:19.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.720 11:50:10 -- nvmf/common.sh@7 -- # uname -s 00:14:19.720 11:50:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.720 11:50:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.720 11:50:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.720 11:50:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.720 11:50:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.720 11:50:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.720 11:50:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.720 11:50:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.720 11:50:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.720 11:50:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.720 11:50:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:19.720 11:50:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:19.720 11:50:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.720 11:50:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.720 11:50:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.720 11:50:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.720 11:50:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.720 11:50:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.720 11:50:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.720 11:50:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.720 11:50:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.720 11:50:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.720 11:50:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.720 11:50:10 -- paths/export.sh@5 -- # export PATH 00:14:19.720 11:50:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.720 11:50:10 -- nvmf/common.sh@47 -- # : 0 00:14:19.720 11:50:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.720 11:50:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.720 11:50:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.720 11:50:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.720 11:50:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.720 11:50:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.720 11:50:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.720 11:50:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2423711 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2423711' 00:14:19.720 Process pid: 2423711 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2423711 00:14:19.720 11:50:10 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:19.720 11:50:10 -- common/autotest_common.sh@817 -- # '[' -z 2423711 ']' 00:14:19.720 11:50:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.720 11:50:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.720 11:50:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.720 11:50:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.720 11:50:10 -- common/autotest_common.sh@10 -- # set +x 00:14:19.720 [2024-04-18 11:50:10.171706] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:19.720 [2024-04-18 11:50:10.171795] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.720 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.979 [2024-04-18 11:50:10.292316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.979 [2024-04-18 11:50:10.512629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.979 [2024-04-18 11:50:10.512681] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.979 [2024-04-18 11:50:10.512693] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.979 [2024-04-18 11:50:10.512706] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.979 [2024-04-18 11:50:10.512715] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.979 [2024-04-18 11:50:10.512795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.979 [2024-04-18 11:50:10.512870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.979 [2024-04-18 11:50:10.512928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.979 [2024-04-18 11:50:10.512937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.546 11:50:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.546 11:50:10 -- common/autotest_common.sh@850 -- # return 0 00:14:20.546 11:50:10 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:21.482 11:50:11 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:21.741 11:50:12 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:21.741 11:50:12 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:21.741 11:50:12 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:21.741 11:50:12 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:21.741 11:50:12 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:22.000 Malloc1 00:14:22.000 11:50:12 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:22.260 11:50:12 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:22.260 11:50:12 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:22.519 11:50:12 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:22.519 11:50:12 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:22.519 11:50:12 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:22.779 Malloc2 00:14:22.779 11:50:13 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:23.038 11:50:13 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:23.038 11:50:13 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:23.296 11:50:13 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:23.296 11:50:13 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:23.296 11:50:13 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:23.296 11:50:13 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:23.296 11:50:13 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:23.296 11:50:13 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:23.296 [2024-04-18 11:50:13.809063] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:23.296 [2024-04-18 11:50:13.809130] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424269 ] 00:14:23.296 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.557 [2024-04-18 11:50:13.856883] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:23.557 [2024-04-18 11:50:13.866110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:23.557 [2024-04-18 11:50:13.866142] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6a584ad000 00:14:23.557 [2024-04-18 11:50:13.867084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.868093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.869089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.870098] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.871101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.872109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
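The vfio-user bring-up above follows one pattern per device (i = 1, 2): create a socket directory under /var/run/vfio-user, back it with a 64 MB malloc bdev (512-byte blocks), create a subsystem, attach the bdev as a namespace, and add a VFIOUSER listener whose address is the directory rather than an IP:port. Collected into one sequence, paraphrasing the rpc.py calls in the trace (the rpc path is shortened here; the trace uses the full workspace path):

rpc=./scripts/rpc.py                        # shortened path, assumption
dir=/var/run/vfio-user/domain/vfio-user1/1

${rpc} nvmf_create_transport -t VFIOUSER    # done once, before the per-device loop
mkdir -p "${dir}"
${rpc} bdev_malloc_create 64 512 -b Malloc1
${rpc} nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
${rpc} nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
${rpc} nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a "${dir}" -s 0

An initiator then attaches through the same directory, which is what the spdk_nvme_identify run above does with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'; the Bar 0-9 mappings logged around this point are the emulated controller's regions being mapped by that client.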
00:14:23.557 [2024-04-18 11:50:13.873110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.874116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.557 [2024-04-18 11:50:13.875119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:23.557 [2024-04-18 11:50:13.875140] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6a584a2000 00:14:23.557 [2024-04-18 11:50:13.876251] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:23.557 [2024-04-18 11:50:13.888683] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:23.557 [2024-04-18 11:50:13.888720] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:23.557 [2024-04-18 11:50:13.894225] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:23.557 [2024-04-18 11:50:13.894359] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:23.557 [2024-04-18 11:50:13.895174] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:23.557 [2024-04-18 11:50:13.895204] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:23.557 [2024-04-18 11:50:13.895216] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:23.557 [2024-04-18 11:50:13.896223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:23.557 [2024-04-18 11:50:13.896245] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:23.557 [2024-04-18 11:50:13.896262] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:23.557 [2024-04-18 11:50:13.897229] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:23.557 [2024-04-18 11:50:13.897248] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:23.557 [2024-04-18 11:50:13.897263] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:23.557 [2024-04-18 11:50:13.898235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:23.557 [2024-04-18 11:50:13.898254] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:23.557 [2024-04-18 11:50:13.899244] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:23.557 [2024-04-18 11:50:13.899263] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:23.557 [2024-04-18 11:50:13.899272] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:23.557 [2024-04-18 11:50:13.899289] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:23.557 [2024-04-18 11:50:13.899399] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:23.557 [2024-04-18 11:50:13.899410] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:23.557 [2024-04-18 11:50:13.899423] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:23.557 [2024-04-18 11:50:13.900256] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:23.558 [2024-04-18 11:50:13.901251] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:23.558 [2024-04-18 11:50:13.902267] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:23.558 [2024-04-18 11:50:13.903259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:23.558 [2024-04-18 11:50:13.903335] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:23.558 [2024-04-18 11:50:13.904281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:23.558 [2024-04-18 11:50:13.904296] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:23.558 [2024-04-18 11:50:13.904308] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904333] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:23.558 [2024-04-18 11:50:13.904352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904379] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:23.558 [2024-04-18 11:50:13.904391] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.558 [2024-04-18 11:50:13.904419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.558 [2024-04-18 
11:50:13.904468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.904487] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:23.558 [2024-04-18 11:50:13.904498] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:23.558 [2024-04-18 11:50:13.904507] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:23.558 [2024-04-18 11:50:13.904518] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:23.558 [2024-04-18 11:50:13.904527] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:23.558 [2024-04-18 11:50:13.904538] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:23.558 [2024-04-18 11:50:13.904551] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904571] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.904609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.904628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.558 [2024-04-18 11:50:13.904645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.558 [2024-04-18 11:50:13.904659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.558 [2024-04-18 11:50:13.904674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.558 [2024-04-18 11:50:13.904682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904697] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.904723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.904733] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:23.558 [2024-04-18 11:50:13.904744] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904755] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904769] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.904797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.904869] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.904906] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:23.558 [2024-04-18 11:50:13.904920] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:23.558 [2024-04-18 11:50:13.904930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.904953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.904982] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:23.558 [2024-04-18 11:50:13.905000] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905013] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905032] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:23.558 [2024-04-18 11:50:13.905041] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.558 [2024-04-18 11:50:13.905053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.905079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.905102] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905137] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:14:23.558 [2024-04-18 11:50:13.905146] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.558 [2024-04-18 11:50:13.905160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.905173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.905194] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905206] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905222] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905233] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905244] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905256] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:23.558 [2024-04-18 11:50:13.905267] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:23.558 [2024-04-18 11:50:13.905276] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:23.558 [2024-04-18 11:50:13.905318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.905331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.905349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.905360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.905378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.905389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.905408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:23.558 [2024-04-18 11:50:13.905418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:23.558 [2024-04-18 11:50:13.905445] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:23.558 [2024-04-18 11:50:13.905464] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:23.558 [2024-04-18 11:50:13.905473] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:23.558 [2024-04-18 11:50:13.905483] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:23.558 [2024-04-18 11:50:13.905496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:23.558 [2024-04-18 11:50:13.905509] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:23.558 [2024-04-18 11:50:13.905520] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:23.559 [2024-04-18 11:50:13.905531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:23.559 [2024-04-18 11:50:13.905545] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:23.559 [2024-04-18 11:50:13.905553] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.559 [2024-04-18 11:50:13.905565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.559 [2024-04-18 11:50:13.905579] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:23.559 [2024-04-18 11:50:13.905589] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:23.559 [2024-04-18 11:50:13.905603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:23.559 [2024-04-18 11:50:13.905620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:23.559 [2024-04-18 11:50:13.905644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:23.559 [2024-04-18 11:50:13.905666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:23.559 [2024-04-18 11:50:13.905681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:23.559 ===================================================== 00:14:23.559 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:23.559 ===================================================== 00:14:23.559 Controller Capabilities/Features 00:14:23.559 ================================ 00:14:23.559 Vendor ID: 4e58 00:14:23.559 Subsystem Vendor ID: 4e58 00:14:23.559 Serial Number: SPDK1 00:14:23.559 Model Number: SPDK bdev Controller 00:14:23.559 Firmware Version: 24.05 00:14:23.559 Recommended Arb Burst: 6 00:14:23.559 IEEE OUI Identifier: 8d 6b 50 00:14:23.559 Multi-path I/O 00:14:23.559 May have multiple subsystem ports: Yes 00:14:23.559 May have multiple controllers: Yes 00:14:23.559 Associated with SR-IOV VF: No 00:14:23.559 Max Data Transfer Size: 131072 00:14:23.559 Max Number of Namespaces: 32 00:14:23.559 Max Number of I/O Queues: 127 00:14:23.559 NVMe 
Specification Version (VS): 1.3 00:14:23.559 NVMe Specification Version (Identify): 1.3 00:14:23.559 Maximum Queue Entries: 256 00:14:23.559 Contiguous Queues Required: Yes 00:14:23.559 Arbitration Mechanisms Supported 00:14:23.559 Weighted Round Robin: Not Supported 00:14:23.559 Vendor Specific: Not Supported 00:14:23.559 Reset Timeout: 15000 ms 00:14:23.559 Doorbell Stride: 4 bytes 00:14:23.559 NVM Subsystem Reset: Not Supported 00:14:23.559 Command Sets Supported 00:14:23.559 NVM Command Set: Supported 00:14:23.559 Boot Partition: Not Supported 00:14:23.559 Memory Page Size Minimum: 4096 bytes 00:14:23.559 Memory Page Size Maximum: 4096 bytes 00:14:23.559 Persistent Memory Region: Not Supported 00:14:23.559 Optional Asynchronous Events Supported 00:14:23.559 Namespace Attribute Notices: Supported 00:14:23.559 Firmware Activation Notices: Not Supported 00:14:23.559 ANA Change Notices: Not Supported 00:14:23.559 PLE Aggregate Log Change Notices: Not Supported 00:14:23.559 LBA Status Info Alert Notices: Not Supported 00:14:23.559 EGE Aggregate Log Change Notices: Not Supported 00:14:23.559 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.559 Zone Descriptor Change Notices: Not Supported 00:14:23.559 Discovery Log Change Notices: Not Supported 00:14:23.559 Controller Attributes 00:14:23.559 128-bit Host Identifier: Supported 00:14:23.559 Non-Operational Permissive Mode: Not Supported 00:14:23.559 NVM Sets: Not Supported 00:14:23.559 Read Recovery Levels: Not Supported 00:14:23.559 Endurance Groups: Not Supported 00:14:23.559 Predictable Latency Mode: Not Supported 00:14:23.559 Traffic Based Keep ALive: Not Supported 00:14:23.559 Namespace Granularity: Not Supported 00:14:23.559 SQ Associations: Not Supported 00:14:23.559 UUID List: Not Supported 00:14:23.559 Multi-Domain Subsystem: Not Supported 00:14:23.559 Fixed Capacity Management: Not Supported 00:14:23.559 Variable Capacity Management: Not Supported 00:14:23.559 Delete Endurance Group: Not Supported 00:14:23.559 Delete NVM Set: Not Supported 00:14:23.559 Extended LBA Formats Supported: Not Supported 00:14:23.559 Flexible Data Placement Supported: Not Supported 00:14:23.559 00:14:23.559 Controller Memory Buffer Support 00:14:23.559 ================================ 00:14:23.559 Supported: No 00:14:23.559 00:14:23.559 Persistent Memory Region Support 00:14:23.559 ================================ 00:14:23.559 Supported: No 00:14:23.559 00:14:23.559 Admin Command Set Attributes 00:14:23.559 ============================ 00:14:23.559 Security Send/Receive: Not Supported 00:14:23.559 Format NVM: Not Supported 00:14:23.559 Firmware Activate/Download: Not Supported 00:14:23.559 Namespace Management: Not Supported 00:14:23.559 Device Self-Test: Not Supported 00:14:23.559 Directives: Not Supported 00:14:23.559 NVMe-MI: Not Supported 00:14:23.559 Virtualization Management: Not Supported 00:14:23.559 Doorbell Buffer Config: Not Supported 00:14:23.559 Get LBA Status Capability: Not Supported 00:14:23.559 Command & Feature Lockdown Capability: Not Supported 00:14:23.559 Abort Command Limit: 4 00:14:23.559 Async Event Request Limit: 4 00:14:23.559 Number of Firmware Slots: N/A 00:14:23.559 Firmware Slot 1 Read-Only: N/A 00:14:23.559 Firmware Activation Without Reset: N/A 00:14:23.559 Multiple Update Detection Support: N/A 00:14:23.559 Firmware Update Granularity: No Information Provided 00:14:23.559 Per-Namespace SMART Log: No 00:14:23.559 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.559 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:14:23.559 Command Effects Log Page: Supported 00:14:23.559 Get Log Page Extended Data: Supported 00:14:23.559 Telemetry Log Pages: Not Supported 00:14:23.559 Persistent Event Log Pages: Not Supported 00:14:23.559 Supported Log Pages Log Page: May Support 00:14:23.559 Commands Supported & Effects Log Page: Not Supported 00:14:23.559 Feature Identifiers & Effects Log Page:May Support 00:14:23.559 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.559 Data Area 4 for Telemetry Log: Not Supported 00:14:23.559 Error Log Page Entries Supported: 128 00:14:23.559 Keep Alive: Supported 00:14:23.559 Keep Alive Granularity: 10000 ms 00:14:23.559 00:14:23.559 NVM Command Set Attributes 00:14:23.559 ========================== 00:14:23.559 Submission Queue Entry Size 00:14:23.559 Max: 64 00:14:23.559 Min: 64 00:14:23.559 Completion Queue Entry Size 00:14:23.559 Max: 16 00:14:23.559 Min: 16 00:14:23.559 Number of Namespaces: 32 00:14:23.559 Compare Command: Supported 00:14:23.559 Write Uncorrectable Command: Not Supported 00:14:23.559 Dataset Management Command: Supported 00:14:23.559 Write Zeroes Command: Supported 00:14:23.559 Set Features Save Field: Not Supported 00:14:23.559 Reservations: Not Supported 00:14:23.559 Timestamp: Not Supported 00:14:23.559 Copy: Supported 00:14:23.559 Volatile Write Cache: Present 00:14:23.559 Atomic Write Unit (Normal): 1 00:14:23.559 Atomic Write Unit (PFail): 1 00:14:23.559 Atomic Compare & Write Unit: 1 00:14:23.559 Fused Compare & Write: Supported 00:14:23.559 Scatter-Gather List 00:14:23.559 SGL Command Set: Supported (Dword aligned) 00:14:23.559 SGL Keyed: Not Supported 00:14:23.559 SGL Bit Bucket Descriptor: Not Supported 00:14:23.559 SGL Metadata Pointer: Not Supported 00:14:23.559 Oversized SGL: Not Supported 00:14:23.559 SGL Metadata Address: Not Supported 00:14:23.559 SGL Offset: Not Supported 00:14:23.559 Transport SGL Data Block: Not Supported 00:14:23.559 Replay Protected Memory Block: Not Supported 00:14:23.559 00:14:23.559 Firmware Slot Information 00:14:23.559 ========================= 00:14:23.559 Active slot: 1 00:14:23.559 Slot 1 Firmware Revision: 24.05 00:14:23.559 00:14:23.559 00:14:23.559 Commands Supported and Effects 00:14:23.559 ============================== 00:14:23.559 Admin Commands 00:14:23.559 -------------- 00:14:23.559 Get Log Page (02h): Supported 00:14:23.559 Identify (06h): Supported 00:14:23.559 Abort (08h): Supported 00:14:23.559 Set Features (09h): Supported 00:14:23.559 Get Features (0Ah): Supported 00:14:23.559 Asynchronous Event Request (0Ch): Supported 00:14:23.559 Keep Alive (18h): Supported 00:14:23.559 I/O Commands 00:14:23.559 ------------ 00:14:23.559 Flush (00h): Supported LBA-Change 00:14:23.559 Write (01h): Supported LBA-Change 00:14:23.559 Read (02h): Supported 00:14:23.559 Compare (05h): Supported 00:14:23.559 Write Zeroes (08h): Supported LBA-Change 00:14:23.559 Dataset Management (09h): Supported LBA-Change 00:14:23.559 Copy (19h): Supported LBA-Change 00:14:23.559 Unknown (79h): Supported LBA-Change 00:14:23.559 Unknown (7Ah): Supported 00:14:23.559 00:14:23.559 Error Log 00:14:23.559 ========= 00:14:23.559 00:14:23.559 Arbitration 00:14:23.559 =========== 00:14:23.560 Arbitration Burst: 1 00:14:23.560 00:14:23.560 Power Management 00:14:23.560 ================ 00:14:23.560 Number of Power States: 1 00:14:23.560 Current Power State: Power State #0 00:14:23.560 Power State #0: 00:14:23.560 Max Power: 0.00 W 00:14:23.560 Non-Operational State: Operational 00:14:23.560 Entry 
Latency: Not Reported 00:14:23.560 Exit Latency: Not Reported 00:14:23.560 Relative Read Throughput: 0 00:14:23.560 Relative Read Latency: 0 00:14:23.560 Relative Write Throughput: 0 00:14:23.560 Relative Write Latency: 0 00:14:23.560 Idle Power: Not Reported 00:14:23.560 Active Power: Not Reported 00:14:23.560 Non-Operational Permissive Mode: Not Supported 00:14:23.560 00:14:23.560 Health Information 00:14:23.560 ================== 00:14:23.560 Critical Warnings: 00:14:23.560 Available Spare Space: OK 00:14:23.560 Temperature: OK 00:14:23.560 Device Reliability: OK 00:14:23.560 Read Only: No 00:14:23.560 Volatile Memory Backup: OK 00:14:23.560 Current Temperature: 0 Kelvin (-2[2024-04-18 11:50:13.905842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:23.560 [2024-04-18 11:50:13.905855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:23.560 [2024-04-18 11:50:13.905902] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:23.560 [2024-04-18 11:50:13.905916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.560 [2024-04-18 11:50:13.905930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.560 [2024-04-18 11:50:13.905941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.560 [2024-04-18 11:50:13.905954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.560 [2024-04-18 11:50:13.909465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:23.560 [2024-04-18 11:50:13.909495] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:23.560 [2024-04-18 11:50:13.910308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.560 [2024-04-18 11:50:13.910374] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:23.560 [2024-04-18 11:50:13.910388] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:23.560 [2024-04-18 11:50:13.911334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:23.560 [2024-04-18 11:50:13.911360] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:23.560 [2024-04-18 11:50:13.912110] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:23.560 [2024-04-18 11:50:13.914467] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:23.560 73 Celsius) 00:14:23.560 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:23.560 Available Spare: 0% 00:14:23.560 Available Spare Threshold: 0% 00:14:23.560 Life Percentage Used: 0% 
00:14:23.560 Data Units Read: 0 00:14:23.560 Data Units Written: 0 00:14:23.560 Host Read Commands: 0 00:14:23.560 Host Write Commands: 0 00:14:23.560 Controller Busy Time: 0 minutes 00:14:23.560 Power Cycles: 0 00:14:23.560 Power On Hours: 0 hours 00:14:23.560 Unsafe Shutdowns: 0 00:14:23.560 Unrecoverable Media Errors: 0 00:14:23.560 Lifetime Error Log Entries: 0 00:14:23.560 Warning Temperature Time: 0 minutes 00:14:23.560 Critical Temperature Time: 0 minutes 00:14:23.560 00:14:23.560 Number of Queues 00:14:23.560 ================ 00:14:23.560 Number of I/O Submission Queues: 127 00:14:23.560 Number of I/O Completion Queues: 127 00:14:23.560 00:14:23.560 Active Namespaces 00:14:23.560 ================= 00:14:23.560 Namespace ID:1 00:14:23.560 Error Recovery Timeout: Unlimited 00:14:23.560 Command Set Identifier: NVM (00h) 00:14:23.560 Deallocate: Supported 00:14:23.560 Deallocated/Unwritten Error: Not Supported 00:14:23.560 Deallocated Read Value: Unknown 00:14:23.560 Deallocate in Write Zeroes: Not Supported 00:14:23.560 Deallocated Guard Field: 0xFFFF 00:14:23.560 Flush: Supported 00:14:23.560 Reservation: Supported 00:14:23.560 Namespace Sharing Capabilities: Multiple Controllers 00:14:23.560 Size (in LBAs): 131072 (0GiB) 00:14:23.560 Capacity (in LBAs): 131072 (0GiB) 00:14:23.560 Utilization (in LBAs): 131072 (0GiB) 00:14:23.560 NGUID: 4815050131D3491180DDAFB9267302C8 00:14:23.560 UUID: 48150501-31d3-4911-80dd-afb9267302c8 00:14:23.560 Thin Provisioning: Not Supported 00:14:23.560 Per-NS Atomic Units: Yes 00:14:23.560 Atomic Boundary Size (Normal): 0 00:14:23.560 Atomic Boundary Size (PFail): 0 00:14:23.560 Atomic Boundary Offset: 0 00:14:23.560 Maximum Single Source Range Length: 65535 00:14:23.560 Maximum Copy Length: 65535 00:14:23.560 Maximum Source Range Count: 1 00:14:23.560 NGUID/EUI64 Never Reused: No 00:14:23.560 Namespace Write Protected: No 00:14:23.560 Number of LBA Formats: 1 00:14:23.560 Current LBA Format: LBA Format #00 00:14:23.560 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.560 00:14:23.560 11:50:14 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:23.560 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.818 [2024-04-18 11:50:14.236706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.091 [2024-04-18 11:50:19.262894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.091 Initializing NVMe Controllers 00:14:29.091 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.091 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:29.091 Initialization complete. Launching workers. 
00:14:29.091 ======================================================== 00:14:29.091 Latency(us) 00:14:29.091 Device Information : IOPS MiB/s Average min max 00:14:29.091 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39883.79 155.80 3208.67 1055.78 7525.62 00:14:29.091 ======================================================== 00:14:29.091 Total : 39883.79 155.80 3208.67 1055.78 7525.62 00:14:29.091 00:14:29.091 11:50:19 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:29.091 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.091 [2024-04-18 11:50:19.577433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.361 [2024-04-18 11:50:24.610393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.361 Initializing NVMe Controllers 00:14:34.361 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.361 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:34.362 Initialization complete. Launching workers. 00:14:34.362 ======================================================== 00:14:34.362 Latency(us) 00:14:34.362 Device Information : IOPS MiB/s Average min max 00:14:34.362 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16043.65 62.67 7977.42 6821.70 8977.76 00:14:34.362 ======================================================== 00:14:34.362 Total : 16043.65 62.67 7977.42 6821.70 8977.76 00:14:34.362 00:14:34.362 11:50:24 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:34.362 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.621 [2024-04-18 11:50:24.973079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.936 [2024-04-18 11:50:30.067590] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.936 Initializing NVMe Controllers 00:14:39.936 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.936 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:39.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:39.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:39.936 Initialization complete. Launching workers. 
00:14:39.936 Starting thread on core 2 00:14:39.936 Starting thread on core 3 00:14:39.936 Starting thread on core 1 00:14:39.936 11:50:30 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:39.936 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.195 [2024-04-18 11:50:30.530036] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.484 [2024-04-18 11:50:33.678104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.484 Initializing NVMe Controllers 00:14:43.484 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.484 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:43.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:43.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:43.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:43.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:43.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:43.484 Initialization complete. Launching workers. 00:14:43.484 Starting thread on core 1 with urgent priority queue 00:14:43.484 Starting thread on core 2 with urgent priority queue 00:14:43.484 Starting thread on core 3 with urgent priority queue 00:14:43.484 Starting thread on core 0 with urgent priority queue 00:14:43.484 SPDK bdev Controller (SPDK1 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:14:43.484 SPDK bdev Controller (SPDK1 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:14:43.484 SPDK bdev Controller (SPDK1 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:14:43.484 SPDK bdev Controller (SPDK1 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:14:43.484 ======================================================== 00:14:43.484 00:14:43.484 11:50:33 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:43.484 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.743 [2024-04-18 11:50:34.145069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.743 [2024-04-18 11:50:34.179608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.743 Initializing NVMe Controllers 00:14:43.743 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.743 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.743 Namespace ID: 1 size: 0GB 00:14:43.743 Initialization complete. 00:14:43.743 INFO: using host memory buffer for IO 00:14:43.743 Hello world! 
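The hello_world pass above closes out the basic host-side checks, and everything in this excerpt follows the same pattern: the target script builds a malloc-backed NVMe-oF subsystem and exposes it on a VFIOUSER listener bound to a per-device socket directory, then each SPDK example binary attaches through that directory via its -r transport string. The lines below are a hand-condensed sketch of that sequence using the commands already visible in this log (the device 1 setup mirrors the device 2 setup captured at the start of this excerpt; paths are shortened to repo-relative form), not part of the captured output:

  # target side: per-device socket dir -> malloc bdev -> subsystem -> namespace -> VFIOUSER listener
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

  # host side: the example apps select the vfio-user transport purely through -r
  build/bin/spdk_nvme_identify -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

The only host-side difference from a PCIe run is that transport string: trtype:VFIOUSER plus a traddr pointing at the listener's socket directory.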
00:14:43.743 11:50:34 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:44.002 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.260 [2024-04-18 11:50:34.630003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.197 Initializing NVMe Controllers 00:14:45.197 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.197 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.197 Initialization complete. Launching workers. 00:14:45.197 submit (in ns) avg, min, max = 8563.2, 3293.6, 4003161.6 00:14:45.197 complete (in ns) avg, min, max = 20511.5, 1838.4, 4001576.8 00:14:45.197 00:14:45.197 Submit histogram 00:14:45.197 ================ 00:14:45.197 Range in us Cumulative Count 00:14:45.197 3.277 - 3.302: 0.0131% ( 2) 00:14:45.197 3.302 - 3.328: 0.0328% ( 3) 00:14:45.197 3.328 - 3.354: 0.1311% ( 15) 00:14:45.197 3.354 - 3.379: 1.2451% ( 170) 00:14:45.197 3.379 - 3.405: 5.5439% ( 656) 00:14:45.197 3.405 - 3.430: 13.4797% ( 1211) 00:14:45.197 3.430 - 3.456: 23.5911% ( 1543) 00:14:45.198 3.456 - 3.482: 35.1048% ( 1757) 00:14:45.198 3.482 - 3.507: 46.4089% ( 1725) 00:14:45.198 3.507 - 3.533: 54.1219% ( 1177) 00:14:45.198 3.533 - 3.558: 59.8296% ( 871) 00:14:45.198 3.558 - 3.584: 64.8820% ( 771) 00:14:45.198 3.584 - 3.610: 70.1900% ( 810) 00:14:45.198 3.610 - 3.635: 77.7523% ( 1154) 00:14:45.198 3.635 - 3.661: 82.5229% ( 728) 00:14:45.198 3.661 - 3.686: 84.7051% ( 333) 00:14:45.198 3.686 - 3.712: 85.8585% ( 176) 00:14:45.198 3.712 - 3.738: 87.3460% ( 227) 00:14:45.198 3.738 - 3.763: 88.9843% ( 250) 00:14:45.198 3.763 - 3.789: 90.9567% ( 301) 00:14:45.198 3.789 - 3.814: 92.6802% ( 263) 00:14:45.198 3.814 - 3.840: 93.8663% ( 181) 00:14:45.198 3.840 - 3.866: 94.8558% ( 151) 00:14:45.198 3.866 - 3.891: 95.8978% ( 159) 00:14:45.198 3.891 - 3.917: 96.5858% ( 105) 00:14:45.198 3.917 - 3.942: 97.1232% ( 82) 00:14:45.198 3.942 - 3.968: 97.5295% ( 62) 00:14:45.198 3.968 - 3.994: 97.8506% ( 49) 00:14:45.198 3.994 - 4.019: 97.9685% ( 18) 00:14:45.198 4.019 - 4.045: 98.1520% ( 28) 00:14:45.198 4.045 - 4.070: 98.2176% ( 10) 00:14:45.198 4.070 - 4.096: 98.2896% ( 11) 00:14:45.198 4.096 - 4.122: 98.3290% ( 6) 00:14:45.198 4.122 - 4.147: 98.3552% ( 4) 00:14:45.198 4.147 - 4.173: 98.3748% ( 3) 00:14:45.198 4.173 - 4.198: 98.4273% ( 8) 00:14:45.198 4.198 - 4.224: 98.4404% ( 2) 00:14:45.198 4.224 - 4.250: 98.4600% ( 3) 00:14:45.198 4.250 - 4.275: 98.4928% ( 5) 00:14:45.198 4.275 - 4.301: 98.5256% ( 5) 00:14:45.198 4.301 - 4.326: 98.5845% ( 9) 00:14:45.198 4.326 - 4.352: 98.6042% ( 3) 00:14:45.198 4.352 - 4.378: 98.6501% ( 7) 00:14:45.198 4.378 - 4.403: 98.6828% ( 5) 00:14:45.198 4.403 - 4.429: 98.7221% ( 6) 00:14:45.198 4.429 - 4.454: 98.7549% ( 5) 00:14:45.198 4.454 - 4.480: 98.7942% ( 6) 00:14:45.198 4.480 - 4.506: 98.8270% ( 5) 00:14:45.198 4.506 - 4.531: 98.8598% ( 5) 00:14:45.198 4.531 - 4.557: 98.9056% ( 7) 00:14:45.198 4.582 - 4.608: 98.9253% ( 3) 00:14:45.198 4.608 - 4.634: 98.9450% ( 3) 00:14:45.198 4.634 - 4.659: 99.0039% ( 9) 00:14:45.198 4.659 - 4.685: 99.0236% ( 3) 00:14:45.198 4.685 - 4.710: 99.0498% ( 4) 00:14:45.198 4.710 - 4.736: 99.0826% ( 5) 00:14:45.198 4.736 - 4.762: 99.0957% ( 2) 00:14:45.198 4.787 - 4.813: 99.1153% ( 3) 00:14:45.198 4.813 - 4.838: 99.1415% ( 4) 00:14:45.198 4.838 - 4.864: 99.1481% ( 1) 00:14:45.198 4.864 - 
4.890: 99.1612% ( 2) 00:14:45.198 4.890 - 4.915: 99.1743% ( 2) 00:14:45.198 4.915 - 4.941: 99.1809% ( 1) 00:14:45.198 4.966 - 4.992: 99.1874% ( 1) 00:14:45.198 5.018 - 5.043: 99.1940% ( 1) 00:14:45.198 5.146 - 5.171: 99.2005% ( 1) 00:14:45.198 5.171 - 5.197: 99.2071% ( 1) 00:14:45.198 5.402 - 5.427: 99.2136% ( 1) 00:14:45.198 5.427 - 5.453: 99.2267% ( 2) 00:14:45.198 5.453 - 5.478: 99.2464% ( 3) 00:14:45.198 5.504 - 5.530: 99.2529% ( 1) 00:14:45.198 5.530 - 5.555: 99.2595% ( 1) 00:14:45.198 5.555 - 5.581: 99.2792% ( 3) 00:14:45.198 5.581 - 5.606: 99.2857% ( 1) 00:14:45.198 5.606 - 5.632: 99.2923% ( 1) 00:14:45.198 5.658 - 5.683: 99.3054% ( 2) 00:14:45.198 5.683 - 5.709: 99.3185% ( 2) 00:14:45.198 5.709 - 5.734: 99.3381% ( 3) 00:14:45.198 5.760 - 5.786: 99.3578% ( 3) 00:14:45.198 5.811 - 5.837: 99.3840% ( 4) 00:14:45.198 5.837 - 5.862: 99.3906% ( 1) 00:14:45.198 5.862 - 5.888: 99.3971% ( 1) 00:14:45.198 5.888 - 5.914: 99.4233% ( 4) 00:14:45.198 5.965 - 5.990: 99.4364% ( 2) 00:14:45.198 6.016 - 6.042: 99.4430% ( 1) 00:14:45.198 6.042 - 6.067: 99.4561% ( 2) 00:14:45.198 6.118 - 6.144: 99.4626% ( 1) 00:14:45.198 6.221 - 6.246: 99.4692% ( 1) 00:14:45.198 6.246 - 6.272: 99.4758% ( 1) 00:14:45.198 6.400 - 6.426: 99.4889% ( 2) 00:14:45.198 6.426 - 6.451: 99.4954% ( 1) 00:14:45.198 6.451 - 6.477: 99.5020% ( 1) 00:14:45.198 6.528 - 6.554: 99.5085% ( 1) 00:14:45.198 6.554 - 6.605: 99.5151% ( 1) 00:14:45.198 6.605 - 6.656: 99.5216% ( 1) 00:14:45.198 6.810 - 6.861: 99.5282% ( 1) 00:14:45.198 6.861 - 6.912: 99.5347% ( 1) 00:14:45.198 6.912 - 6.963: 99.5544% ( 3) 00:14:45.198 7.014 - 7.066: 99.5609% ( 1) 00:14:45.198 7.117 - 7.168: 99.5806% ( 3) 00:14:45.198 7.219 - 7.270: 99.5872% ( 1) 00:14:45.198 7.270 - 7.322: 99.5937% ( 1) 00:14:45.198 7.322 - 7.373: 99.6003% ( 1) 00:14:45.198 7.475 - 7.526: 99.6068% ( 1) 00:14:45.198 7.526 - 7.578: 99.6199% ( 2) 00:14:45.198 7.578 - 7.629: 99.6396% ( 3) 00:14:45.198 7.629 - 7.680: 99.6592% ( 3) 00:14:45.198 7.680 - 7.731: 99.6723% ( 2) 00:14:45.198 7.731 - 7.782: 99.6920% ( 3) 00:14:45.198 7.782 - 7.834: 99.6986% ( 1) 00:14:45.198 7.834 - 7.885: 99.7117% ( 2) 00:14:45.198 7.936 - 7.987: 99.7313% ( 3) 00:14:45.198 7.987 - 8.038: 99.7444% ( 2) 00:14:45.198 8.090 - 8.141: 99.7510% ( 1) 00:14:45.198 8.243 - 8.294: 99.7575% ( 1) 00:14:45.198 8.346 - 8.397: 99.7641% ( 1) 00:14:45.198 8.448 - 8.499: 99.7706% ( 1) 00:14:45.198 8.550 - 8.602: 99.7772% ( 1) 00:14:45.198 8.806 - 8.858: 99.7969% ( 3) 00:14:45.198 8.909 - 8.960: 99.8034% ( 1) 00:14:45.198 8.960 - 9.011: 99.8100% ( 1) 00:14:45.198 9.011 - 9.062: 99.8165% ( 1) 00:14:45.198 9.267 - 9.318: 99.8231% ( 1) 00:14:45.198 9.318 - 9.370: 99.8296% ( 1) 00:14:45.198 9.728 - 9.779: 99.8362% ( 1) 00:14:45.198 9.984 - 10.035: 99.8427% ( 1) 00:14:45.198 10.240 - 10.291: 99.8493% ( 1) 00:14:45.198 11.930 - 11.981: 99.8558% ( 1) 00:14:45.198 13.722 - 13.824: 99.8624% ( 1) 00:14:45.198 14.029 - 14.131: 99.8689% ( 1) 00:14:45.198 18.637 - 18.739: 99.8755% ( 1) 00:14:45.198 3984.589 - 4010.803: 100.0000% ( 19) 00:14:45.198 00:14:45.198 Complete histogram 00:14:45.198 ================== 00:14:45.198 Range in us Cumulative Count 00:14:45.198 1.830 - 1.843: 0.0393% ( 6) 00:14:45.198 1.843 - 1.856: 0.1704% ( 20) 00:14:45.198 1.856 - 1.869: 0.2621% ( 14) 00:14:45.198 1.869 - 1.882: 0.3801% ( 18) 00:14:45.198 1.882 - 1.894: 8.7156% ( 1272) 00:14:45.198 1.894 - 1.907: 44.9345% ( 5527) 00:14:45.198 1.907 - 1.920: 72.2608% ( 4170) 00:14:45.198 1.920 - 1.933: 85.0786% ( 1956) 00:14:45.198 1.933 - 1.946: 92.3460% ( 1109) 00:14:45.198 1.946 - 
1.958: 95.4456% ( 473) 00:14:45.198 1.958 - 1.971: 96.8545% ( 215) 00:14:45.198 1.971 - 1.984: 97.7785% ( 141) 00:14:45.198 1.984 - 1.997: 98.2831% ( 77) 00:14:45.198 1.997 - 2.010: 98.5125% ( 35) 00:14:45.198 2.010 - 2.022: 98.6239% ( 17) 00:14:45.198 2.022 - 2.035: 98.6435% ( 3) 00:14:45.198 2.035 - 2.048: 98.6566% ( 2) 00:14:45.198 2.048 - 2.061: 98.6959% ( 6) 00:14:45.198 2.061 - 2.074: 98.7287% ( 5) 00:14:45.198 2.074 - 2.086: 98.7615% ( 5) 00:14:45.198 2.086 - 2.099: 98.7811% ( 3) 00:14:45.198 2.125 - 2.138: 98.7942% ( 2) 00:14:45.198 2.138 - 2.150: 98.8008% ( 1) 00:14:45.198 2.150 - 2.163: 98.8204% ( 3) 00:14:45.198 2.176 - 2.189: 98.8270% ( 1) 00:14:45.198 2.202 - 2.214: 98.8336% ( 1) 00:14:45.198 2.214 - 2.227: 98.8467% ( 2) 00:14:45.198 2.227 - 2.240: 98.8598% ( 2) 00:14:45.198 2.240 - 2.253: 98.8925% ( 5) 00:14:45.198 2.253 - 2.266: 98.8991% ( 1) 00:14:45.198 2.266 - 2.278: 98.9187% ( 3) 00:14:45.198 2.278 - 2.291: 98.9384% ( 3) 00:14:45.198 2.304 - 2.3[2024-04-18 11:50:35.652493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.198 17: 98.9515% ( 2) 00:14:45.198 2.317 - 2.330: 98.9712% ( 3) 00:14:45.198 2.342 - 2.355: 98.9777% ( 1) 00:14:45.198 2.368 - 2.381: 98.9843% ( 1) 00:14:45.198 2.394 - 2.406: 98.9974% ( 2) 00:14:45.198 2.406 - 2.419: 99.0039% ( 1) 00:14:45.198 2.419 - 2.432: 99.0170% ( 2) 00:14:45.198 2.470 - 2.483: 99.0367% ( 3) 00:14:45.198 2.509 - 2.522: 99.0498% ( 2) 00:14:45.198 2.534 - 2.547: 99.0564% ( 1) 00:14:45.198 2.611 - 2.624: 99.0629% ( 1) 00:14:45.198 2.714 - 2.726: 99.0695% ( 1) 00:14:45.198 2.726 - 2.739: 99.0760% ( 1) 00:14:45.198 2.739 - 2.752: 99.0957% ( 3) 00:14:45.198 2.790 - 2.803: 99.1022% ( 1) 00:14:45.198 2.816 - 2.829: 99.1153% ( 2) 00:14:45.198 2.854 - 2.867: 99.1350% ( 3) 00:14:45.198 2.880 - 2.893: 99.1481% ( 2) 00:14:45.198 2.893 - 2.906: 99.1678% ( 3) 00:14:45.198 2.906 - 2.918: 99.1743% ( 1) 00:14:45.198 2.931 - 2.944: 99.1874% ( 2) 00:14:45.198 2.944 - 2.957: 99.1940% ( 1) 00:14:45.198 2.957 - 2.970: 99.2005% ( 1) 00:14:45.198 2.970 - 2.982: 99.2071% ( 1) 00:14:45.198 3.008 - 3.021: 99.2136% ( 1) 00:14:45.198 3.034 - 3.046: 99.2202% ( 1) 00:14:45.198 3.110 - 3.123: 99.2398% ( 3) 00:14:45.198 3.123 - 3.136: 99.2464% ( 1) 00:14:45.199 3.174 - 3.187: 99.2529% ( 1) 00:14:45.199 3.200 - 3.213: 99.2595% ( 1) 00:14:45.199 3.302 - 3.328: 99.2661% ( 1) 00:14:45.199 3.328 - 3.354: 99.2726% ( 1) 00:14:45.199 3.379 - 3.405: 99.2792% ( 1) 00:14:45.199 3.430 - 3.456: 99.2857% ( 1) 00:14:45.199 3.763 - 3.789: 99.2923% ( 1) 00:14:45.199 3.789 - 3.814: 99.2988% ( 1) 00:14:45.199 3.891 - 3.917: 99.3119% ( 2) 00:14:45.199 3.917 - 3.942: 99.3185% ( 1) 00:14:45.199 3.942 - 3.968: 99.3250% ( 1) 00:14:45.199 3.968 - 3.994: 99.3381% ( 2) 00:14:45.199 4.019 - 4.045: 99.3447% ( 1) 00:14:45.199 4.301 - 4.326: 99.3512% ( 1) 00:14:45.199 4.531 - 4.557: 99.3644% ( 2) 00:14:45.199 4.582 - 4.608: 99.3709% ( 1) 00:14:45.199 4.762 - 4.787: 99.3775% ( 1) 00:14:45.199 4.992 - 5.018: 99.3840% ( 1) 00:14:45.199 5.197 - 5.222: 99.3906% ( 1) 00:14:45.199 5.427 - 5.453: 99.3971% ( 1) 00:14:45.199 5.453 - 5.478: 99.4037% ( 1) 00:14:45.199 5.478 - 5.504: 99.4102% ( 1) 00:14:45.199 5.709 - 5.734: 99.4168% ( 1) 00:14:45.199 5.837 - 5.862: 99.4233% ( 1) 00:14:45.199 5.914 - 5.939: 99.4299% ( 1) 00:14:45.199 5.965 - 5.990: 99.4364% ( 1) 00:14:45.199 6.118 - 6.144: 99.4430% ( 1) 00:14:45.199 6.246 - 6.272: 99.4495% ( 1) 00:14:45.199 6.400 - 6.426: 99.4561% ( 1) 00:14:45.199 6.426 - 6.451: 99.4626% ( 1) 00:14:45.199 6.554 
- 6.605: 99.4692% ( 1) 00:14:45.199 6.656 - 6.707: 99.4758% ( 1) 00:14:45.199 6.758 - 6.810: 99.4889% ( 2) 00:14:45.199 7.014 - 7.066: 99.4954% ( 1) 00:14:45.199 7.066 - 7.117: 99.5020% ( 1) 00:14:45.199 8.038 - 8.090: 99.5085% ( 1) 00:14:45.199 12.237 - 12.288: 99.5151% ( 1) 00:14:45.199 12.698 - 12.749: 99.5216% ( 1) 00:14:45.199 16.077 - 16.179: 99.5282% ( 1) 00:14:45.199 17.613 - 17.715: 99.5347% ( 1) 00:14:45.199 3801.088 - 3827.302: 99.5413% ( 1) 00:14:45.199 3984.589 - 4010.803: 100.0000% ( 70) 00:14:45.199 00:14:45.458 11:50:35 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:45.458 11:50:35 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:45.458 11:50:35 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:45.458 11:50:35 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:45.458 11:50:35 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:45.458 [2024-04-18 11:50:35.913734] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:45.458 [ 00:14:45.458 { 00:14:45.458 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.458 "subtype": "Discovery", 00:14:45.458 "listen_addresses": [], 00:14:45.458 "allow_any_host": true, 00:14:45.458 "hosts": [] 00:14:45.458 }, 00:14:45.458 { 00:14:45.458 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:45.458 "subtype": "NVMe", 00:14:45.458 "listen_addresses": [ 00:14:45.458 { 00:14:45.458 "transport": "VFIOUSER", 00:14:45.458 "trtype": "VFIOUSER", 00:14:45.458 "adrfam": "IPv4", 00:14:45.458 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:45.458 "trsvcid": "0" 00:14:45.458 } 00:14:45.458 ], 00:14:45.458 "allow_any_host": true, 00:14:45.458 "hosts": [], 00:14:45.459 "serial_number": "SPDK1", 00:14:45.459 "model_number": "SPDK bdev Controller", 00:14:45.459 "max_namespaces": 32, 00:14:45.459 "min_cntlid": 1, 00:14:45.459 "max_cntlid": 65519, 00:14:45.459 "namespaces": [ 00:14:45.459 { 00:14:45.459 "nsid": 1, 00:14:45.459 "bdev_name": "Malloc1", 00:14:45.459 "name": "Malloc1", 00:14:45.459 "nguid": "4815050131D3491180DDAFB9267302C8", 00:14:45.459 "uuid": "48150501-31d3-4911-80dd-afb9267302c8" 00:14:45.459 } 00:14:45.459 ] 00:14:45.459 }, 00:14:45.459 { 00:14:45.459 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:45.459 "subtype": "NVMe", 00:14:45.459 "listen_addresses": [ 00:14:45.459 { 00:14:45.459 "transport": "VFIOUSER", 00:14:45.459 "trtype": "VFIOUSER", 00:14:45.459 "adrfam": "IPv4", 00:14:45.459 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:45.459 "trsvcid": "0" 00:14:45.459 } 00:14:45.459 ], 00:14:45.459 "allow_any_host": true, 00:14:45.459 "hosts": [], 00:14:45.459 "serial_number": "SPDK2", 00:14:45.459 "model_number": "SPDK bdev Controller", 00:14:45.459 "max_namespaces": 32, 00:14:45.459 "min_cntlid": 1, 00:14:45.459 "max_cntlid": 65519, 00:14:45.459 "namespaces": [ 00:14:45.459 { 00:14:45.459 "nsid": 1, 00:14:45.459 "bdev_name": "Malloc2", 00:14:45.459 "name": "Malloc2", 00:14:45.459 "nguid": "F1EA0103DC3947F4B1DEF0B464945694", 00:14:45.459 "uuid": "f1ea0103-dc39-47f4-b1de-f0b464945694" 00:14:45.459 } 00:14:45.459 ] 00:14:45.459 } 00:14:45.459 ] 00:14:45.459 11:50:35 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:45.459 11:50:35 -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:45.459 11:50:35 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2428009 00:14:45.459 11:50:35 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:45.459 11:50:35 -- common/autotest_common.sh@1251 -- # local i=0 00:14:45.459 11:50:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:45.459 11:50:35 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:45.459 11:50:35 -- common/autotest_common.sh@1262 -- # return 0 00:14:45.459 11:50:35 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:45.459 11:50:35 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:45.718 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.718 Malloc3 00:14:45.718 11:50:36 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:45.718 [2024-04-18 11:50:36.257493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.977 [2024-04-18 11:50:36.405675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.977 11:50:36 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:45.977 Asynchronous Event Request test 00:14:45.977 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.977 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.977 Registering asynchronous event callbacks... 00:14:45.977 Starting namespace attribute notice tests for all controllers... 00:14:45.977 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:45.977 aer_cb - Changed Namespace 00:14:45.977 Cleaning up... 
00:14:46.237 [ 00:14:46.237 { 00:14:46.237 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.237 "subtype": "Discovery", 00:14:46.237 "listen_addresses": [], 00:14:46.237 "allow_any_host": true, 00:14:46.237 "hosts": [] 00:14:46.237 }, 00:14:46.237 { 00:14:46.237 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.237 "subtype": "NVMe", 00:14:46.237 "listen_addresses": [ 00:14:46.237 { 00:14:46.237 "transport": "VFIOUSER", 00:14:46.237 "trtype": "VFIOUSER", 00:14:46.237 "adrfam": "IPv4", 00:14:46.237 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.237 "trsvcid": "0" 00:14:46.237 } 00:14:46.237 ], 00:14:46.237 "allow_any_host": true, 00:14:46.237 "hosts": [], 00:14:46.237 "serial_number": "SPDK1", 00:14:46.237 "model_number": "SPDK bdev Controller", 00:14:46.237 "max_namespaces": 32, 00:14:46.237 "min_cntlid": 1, 00:14:46.237 "max_cntlid": 65519, 00:14:46.237 "namespaces": [ 00:14:46.237 { 00:14:46.237 "nsid": 1, 00:14:46.237 "bdev_name": "Malloc1", 00:14:46.237 "name": "Malloc1", 00:14:46.237 "nguid": "4815050131D3491180DDAFB9267302C8", 00:14:46.237 "uuid": "48150501-31d3-4911-80dd-afb9267302c8" 00:14:46.237 }, 00:14:46.237 { 00:14:46.237 "nsid": 2, 00:14:46.237 "bdev_name": "Malloc3", 00:14:46.237 "name": "Malloc3", 00:14:46.237 "nguid": "59011534B35D4686A5450D0665106BD3", 00:14:46.237 "uuid": "59011534-b35d-4686-a545-0d0665106bd3" 00:14:46.237 } 00:14:46.237 ] 00:14:46.237 }, 00:14:46.237 { 00:14:46.237 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.237 "subtype": "NVMe", 00:14:46.237 "listen_addresses": [ 00:14:46.237 { 00:14:46.237 "transport": "VFIOUSER", 00:14:46.237 "trtype": "VFIOUSER", 00:14:46.237 "adrfam": "IPv4", 00:14:46.237 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.237 "trsvcid": "0" 00:14:46.237 } 00:14:46.237 ], 00:14:46.237 "allow_any_host": true, 00:14:46.237 "hosts": [], 00:14:46.237 "serial_number": "SPDK2", 00:14:46.237 "model_number": "SPDK bdev Controller", 00:14:46.237 "max_namespaces": 32, 00:14:46.237 "min_cntlid": 1, 00:14:46.237 "max_cntlid": 65519, 00:14:46.237 "namespaces": [ 00:14:46.237 { 00:14:46.237 "nsid": 1, 00:14:46.237 "bdev_name": "Malloc2", 00:14:46.237 "name": "Malloc2", 00:14:46.237 "nguid": "F1EA0103DC3947F4B1DEF0B464945694", 00:14:46.237 "uuid": "f1ea0103-dc39-47f4-b1de-f0b464945694" 00:14:46.237 } 00:14:46.237 ] 00:14:46.237 } 00:14:46.237 ] 00:14:46.237 11:50:36 -- target/nvmf_vfio_user.sh@44 -- # wait 2428009 00:14:46.237 11:50:36 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.237 11:50:36 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:46.237 11:50:36 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:46.237 11:50:36 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.237 [2024-04-18 11:50:36.661006] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:46.237 [2024-04-18 11:50:36.661089] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428270 ] 00:14:46.237 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.237 [2024-04-18 11:50:36.710795] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:46.237 [2024-04-18 11:50:36.721496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.237 [2024-04-18 11:50:36.721529] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc0c6562000 00:14:46.237 [2024-04-18 11:50:36.722495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.237 [2024-04-18 11:50:36.723508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.724528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.725522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.726527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.727538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.728547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.729549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.238 [2024-04-18 11:50:36.730559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.238 [2024-04-18 11:50:36.730582] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc0c6557000 00:14:46.238 [2024-04-18 11:50:36.731678] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.238 [2024-04-18 11:50:36.744001] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:46.238 [2024-04-18 11:50:36.744037] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:46.238 [2024-04-18 11:50:36.746125] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:46.238 [2024-04-18 11:50:36.746259] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:46.238 [2024-04-18 11:50:36.747139] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:14:46.238 [2024-04-18 11:50:36.747164] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:46.238 [2024-04-18 11:50:36.747176] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:46.238 [2024-04-18 11:50:36.748460] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:46.238 [2024-04-18 11:50:36.748483] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:46.238 [2024-04-18 11:50:36.748497] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:46.238 [2024-04-18 11:50:36.749250] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:46.238 [2024-04-18 11:50:36.749267] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:46.238 [2024-04-18 11:50:36.749284] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:46.238 [2024-04-18 11:50:36.750252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:46.238 [2024-04-18 11:50:36.750274] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:46.238 [2024-04-18 11:50:36.751259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:46.238 [2024-04-18 11:50:36.751280] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:46.238 [2024-04-18 11:50:36.751289] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:46.238 [2024-04-18 11:50:36.751305] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:46.238 [2024-04-18 11:50:36.751415] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:46.238 [2024-04-18 11:50:36.751426] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:46.238 [2024-04-18 11:50:36.751438] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:46.238 [2024-04-18 11:50:36.755466] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:46.238 [2024-04-18 11:50:36.756293] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:46.238 [2024-04-18 11:50:36.757294] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:46.238 [2024-04-18 11:50:36.758296] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.238 [2024-04-18 11:50:36.758353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:46.238 [2024-04-18 11:50:36.759303] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:46.238 [2024-04-18 11:50:36.759321] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:46.238 [2024-04-18 11:50:36.759333] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.759359] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:46.238 [2024-04-18 11:50:36.759387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.759411] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.238 [2024-04-18 11:50:36.759423] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.238 [2024-04-18 11:50:36.759441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.238 [2024-04-18 11:50:36.765472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:46.238 [2024-04-18 11:50:36.765498] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:46.238 [2024-04-18 11:50:36.765510] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:46.238 [2024-04-18 11:50:36.765519] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:46.238 [2024-04-18 11:50:36.765529] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:46.238 [2024-04-18 11:50:36.765539] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:46.238 [2024-04-18 11:50:36.765549] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:46.238 [2024-04-18 11:50:36.765563] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.765582] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.765600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:46.238 [2024-04-18 11:50:36.773470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:46.238 [2024-04-18 11:50:36.773500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.238 [2024-04-18 11:50:36.773518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.238 [2024-04-18 11:50:36.773530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.238 [2024-04-18 11:50:36.773544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.238 [2024-04-18 11:50:36.773555] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.773572] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.773585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:46.238 [2024-04-18 11:50:36.781468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:46.238 [2024-04-18 11:50:36.781487] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:46.238 [2024-04-18 11:50:36.781500] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.781512] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.781526] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:46.238 [2024-04-18 11:50:36.781543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.789467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.789548] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.789572] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.789588] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:46.499 [2024-04-18 11:50:36.789602] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:46.499 [2024-04-18 11:50:36.789614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:46.499 
[2024-04-18 11:50:36.797467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.797512] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:46.499 [2024-04-18 11:50:36.797531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.797546] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.797567] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.499 [2024-04-18 11:50:36.797576] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.499 [2024-04-18 11:50:36.797592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.805464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.805498] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.805512] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.805532] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.499 [2024-04-18 11:50:36.805541] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.499 [2024-04-18 11:50:36.805556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.813459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.813492] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.813506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.813523] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.813533] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.813548] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.813557] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:46.499 [2024-04-18 11:50:36.813569] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:46.499 [2024-04-18 11:50:36.813578] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:46.499 [2024-04-18 11:50:36.813615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.821464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.821496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.829461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.829494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.837463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.837493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.845462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.845513] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:46.499 [2024-04-18 11:50:36.845523] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:46.499 [2024-04-18 11:50:36.845532] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:46.499 [2024-04-18 11:50:36.845542] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:46.499 [2024-04-18 11:50:36.845555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:46.499 [2024-04-18 11:50:36.845568] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:46.499 [2024-04-18 11:50:36.845581] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:46.499 [2024-04-18 11:50:36.845591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.845606] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:46.499 [2024-04-18 11:50:36.845614] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.499 [2024-04-18 11:50:36.845626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.845642] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:46.499 [2024-04-18 11:50:36.845653] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:46.499 [2024-04-18 11:50:36.845665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:46.499 [2024-04-18 11:50:36.853465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.853499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.853522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:46.499 [2024-04-18 11:50:36.853534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:46.499 ===================================================== 00:14:46.499 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.499 ===================================================== 00:14:46.499 Controller Capabilities/Features 00:14:46.499 ================================ 00:14:46.499 Vendor ID: 4e58 00:14:46.499 Subsystem Vendor ID: 4e58 00:14:46.499 Serial Number: SPDK2 00:14:46.499 Model Number: SPDK bdev Controller 00:14:46.499 Firmware Version: 24.05 00:14:46.499 Recommended Arb Burst: 6 00:14:46.499 IEEE OUI Identifier: 8d 6b 50 00:14:46.499 Multi-path I/O 00:14:46.499 May have multiple subsystem ports: Yes 00:14:46.499 May have multiple controllers: Yes 00:14:46.499 Associated with SR-IOV VF: No 00:14:46.499 Max Data Transfer Size: 131072 00:14:46.499 Max Number of Namespaces: 32 00:14:46.499 Max Number of I/O Queues: 127 00:14:46.499 NVMe Specification Version (VS): 1.3 00:14:46.499 NVMe Specification Version (Identify): 1.3 00:14:46.499 Maximum Queue Entries: 256 00:14:46.499 Contiguous Queues Required: Yes 00:14:46.499 Arbitration Mechanisms Supported 00:14:46.499 Weighted Round Robin: Not Supported 00:14:46.499 Vendor Specific: Not Supported 00:14:46.499 Reset Timeout: 15000 ms 00:14:46.499 Doorbell Stride: 4 bytes 00:14:46.499 NVM Subsystem Reset: Not Supported 00:14:46.499 Command Sets Supported 00:14:46.499 NVM Command Set: Supported 00:14:46.499 Boot Partition: Not Supported 00:14:46.499 Memory Page Size Minimum: 4096 bytes 00:14:46.499 Memory Page Size Maximum: 4096 bytes 00:14:46.499 Persistent Memory Region: Not Supported 00:14:46.499 Optional Asynchronous Events Supported 00:14:46.499 Namespace Attribute Notices: Supported 00:14:46.499 Firmware Activation Notices: Not Supported 00:14:46.499 ANA Change Notices: Not Supported 00:14:46.499 PLE Aggregate Log Change Notices: Not Supported 00:14:46.499 LBA Status Info Alert Notices: Not Supported 00:14:46.499 EGE Aggregate Log Change Notices: Not Supported 00:14:46.499 Normal NVM Subsystem Shutdown event: Not Supported 00:14:46.499 Zone Descriptor Change Notices: Not Supported 00:14:46.499 Discovery Log Change Notices: Not Supported 00:14:46.499 Controller Attributes 00:14:46.499 128-bit Host Identifier: Supported 00:14:46.499 Non-Operational Permissive Mode: Not Supported 00:14:46.499 NVM Sets: Not Supported 00:14:46.499 Read Recovery Levels: Not Supported 00:14:46.499 Endurance Groups: Not Supported 00:14:46.499 Predictable Latency Mode: Not Supported 00:14:46.499 Traffic Based Keep ALive: Not Supported 00:14:46.499 Namespace Granularity: Not Supported 
00:14:46.499 SQ Associations: Not Supported 00:14:46.499 UUID List: Not Supported 00:14:46.499 Multi-Domain Subsystem: Not Supported 00:14:46.499 Fixed Capacity Management: Not Supported 00:14:46.499 Variable Capacity Management: Not Supported 00:14:46.499 Delete Endurance Group: Not Supported 00:14:46.499 Delete NVM Set: Not Supported 00:14:46.499 Extended LBA Formats Supported: Not Supported 00:14:46.500 Flexible Data Placement Supported: Not Supported 00:14:46.500 00:14:46.500 Controller Memory Buffer Support 00:14:46.500 ================================ 00:14:46.500 Supported: No 00:14:46.500 00:14:46.500 Persistent Memory Region Support 00:14:46.500 ================================ 00:14:46.500 Supported: No 00:14:46.500 00:14:46.500 Admin Command Set Attributes 00:14:46.500 ============================ 00:14:46.500 Security Send/Receive: Not Supported 00:14:46.500 Format NVM: Not Supported 00:14:46.500 Firmware Activate/Download: Not Supported 00:14:46.500 Namespace Management: Not Supported 00:14:46.500 Device Self-Test: Not Supported 00:14:46.500 Directives: Not Supported 00:14:46.500 NVMe-MI: Not Supported 00:14:46.500 Virtualization Management: Not Supported 00:14:46.500 Doorbell Buffer Config: Not Supported 00:14:46.500 Get LBA Status Capability: Not Supported 00:14:46.500 Command & Feature Lockdown Capability: Not Supported 00:14:46.500 Abort Command Limit: 4 00:14:46.500 Async Event Request Limit: 4 00:14:46.500 Number of Firmware Slots: N/A 00:14:46.500 Firmware Slot 1 Read-Only: N/A 00:14:46.500 Firmware Activation Without Reset: N/A 00:14:46.500 Multiple Update Detection Support: N/A 00:14:46.500 Firmware Update Granularity: No Information Provided 00:14:46.500 Per-Namespace SMART Log: No 00:14:46.500 Asymmetric Namespace Access Log Page: Not Supported 00:14:46.500 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:46.500 Command Effects Log Page: Supported 00:14:46.500 Get Log Page Extended Data: Supported 00:14:46.500 Telemetry Log Pages: Not Supported 00:14:46.500 Persistent Event Log Pages: Not Supported 00:14:46.500 Supported Log Pages Log Page: May Support 00:14:46.500 Commands Supported & Effects Log Page: Not Supported 00:14:46.500 Feature Identifiers & Effects Log Page:May Support 00:14:46.500 NVMe-MI Commands & Effects Log Page: May Support 00:14:46.500 Data Area 4 for Telemetry Log: Not Supported 00:14:46.500 Error Log Page Entries Supported: 128 00:14:46.500 Keep Alive: Supported 00:14:46.500 Keep Alive Granularity: 10000 ms 00:14:46.500 00:14:46.500 NVM Command Set Attributes 00:14:46.500 ========================== 00:14:46.500 Submission Queue Entry Size 00:14:46.500 Max: 64 00:14:46.500 Min: 64 00:14:46.500 Completion Queue Entry Size 00:14:46.500 Max: 16 00:14:46.500 Min: 16 00:14:46.500 Number of Namespaces: 32 00:14:46.500 Compare Command: Supported 00:14:46.500 Write Uncorrectable Command: Not Supported 00:14:46.500 Dataset Management Command: Supported 00:14:46.500 Write Zeroes Command: Supported 00:14:46.500 Set Features Save Field: Not Supported 00:14:46.500 Reservations: Not Supported 00:14:46.500 Timestamp: Not Supported 00:14:46.500 Copy: Supported 00:14:46.500 Volatile Write Cache: Present 00:14:46.500 Atomic Write Unit (Normal): 1 00:14:46.500 Atomic Write Unit (PFail): 1 00:14:46.500 Atomic Compare & Write Unit: 1 00:14:46.500 Fused Compare & Write: Supported 00:14:46.500 Scatter-Gather List 00:14:46.500 SGL Command Set: Supported (Dword aligned) 00:14:46.500 SGL Keyed: Not Supported 00:14:46.500 SGL Bit Bucket Descriptor: Not Supported 00:14:46.500 
SGL Metadata Pointer: Not Supported 00:14:46.500 Oversized SGL: Not Supported 00:14:46.500 SGL Metadata Address: Not Supported 00:14:46.500 SGL Offset: Not Supported 00:14:46.500 Transport SGL Data Block: Not Supported 00:14:46.500 Replay Protected Memory Block: Not Supported 00:14:46.500 00:14:46.500 Firmware Slot Information 00:14:46.500 ========================= 00:14:46.500 Active slot: 1 00:14:46.500 Slot 1 Firmware Revision: 24.05 00:14:46.500 00:14:46.500 00:14:46.500 Commands Supported and Effects 00:14:46.500 ============================== 00:14:46.500 Admin Commands 00:14:46.500 -------------- 00:14:46.500 Get Log Page (02h): Supported 00:14:46.500 Identify (06h): Supported 00:14:46.500 Abort (08h): Supported 00:14:46.500 Set Features (09h): Supported 00:14:46.500 Get Features (0Ah): Supported 00:14:46.500 Asynchronous Event Request (0Ch): Supported 00:14:46.500 Keep Alive (18h): Supported 00:14:46.500 I/O Commands 00:14:46.500 ------------ 00:14:46.500 Flush (00h): Supported LBA-Change 00:14:46.500 Write (01h): Supported LBA-Change 00:14:46.500 Read (02h): Supported 00:14:46.500 Compare (05h): Supported 00:14:46.500 Write Zeroes (08h): Supported LBA-Change 00:14:46.500 Dataset Management (09h): Supported LBA-Change 00:14:46.500 Copy (19h): Supported LBA-Change 00:14:46.500 Unknown (79h): Supported LBA-Change 00:14:46.500 Unknown (7Ah): Supported 00:14:46.500 00:14:46.500 Error Log 00:14:46.500 ========= 00:14:46.500 00:14:46.500 Arbitration 00:14:46.500 =========== 00:14:46.500 Arbitration Burst: 1 00:14:46.500 00:14:46.500 Power Management 00:14:46.500 ================ 00:14:46.500 Number of Power States: 1 00:14:46.500 Current Power State: Power State #0 00:14:46.500 Power State #0: 00:14:46.500 Max Power: 0.00 W 00:14:46.500 Non-Operational State: Operational 00:14:46.500 Entry Latency: Not Reported 00:14:46.500 Exit Latency: Not Reported 00:14:46.500 Relative Read Throughput: 0 00:14:46.500 Relative Read Latency: 0 00:14:46.500 Relative Write Throughput: 0 00:14:46.500 Relative Write Latency: 0 00:14:46.500 Idle Power: Not Reported 00:14:46.500 Active Power: Not Reported 00:14:46.500 Non-Operational Permissive Mode: Not Supported 00:14:46.500 00:14:46.500 Health Information 00:14:46.500 ================== 00:14:46.500 Critical Warnings: 00:14:46.500 Available Spare Space: OK 00:14:46.500 Temperature: OK 00:14:46.500 Device Reliability: OK 00:14:46.500 Read Only: No 00:14:46.500 Volatile Memory Backup: OK 00:14:46.500 Current Temperature: 0 Kelvin (-2[2024-04-18 11:50:36.853695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:46.500 [2024-04-18 11:50:36.861462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:46.500 [2024-04-18 11:50:36.861522] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:46.500 [2024-04-18 11:50:36.861538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.500 [2024-04-18 11:50:36.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.500 [2024-04-18 11:50:36.861563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.500 [2024-04-18 11:50:36.861575] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.500 [2024-04-18 11:50:36.861635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:46.500 [2024-04-18 11:50:36.861655] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:46.500 [2024-04-18 11:50:36.862645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.500 [2024-04-18 11:50:36.862710] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:46.500 [2024-04-18 11:50:36.862725] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:46.500 [2024-04-18 11:50:36.863647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:46.500 [2024-04-18 11:50:36.863674] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:46.500 [2024-04-18 11:50:36.864346] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:46.500 [2024-04-18 11:50:36.865334] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.500 73 Celsius) 00:14:46.500 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:46.500 Available Spare: 0% 00:14:46.500 Available Spare Threshold: 0% 00:14:46.500 Life Percentage Used: 0% 00:14:46.500 Data Units Read: 0 00:14:46.500 Data Units Written: 0 00:14:46.500 Host Read Commands: 0 00:14:46.500 Host Write Commands: 0 00:14:46.500 Controller Busy Time: 0 minutes 00:14:46.500 Power Cycles: 0 00:14:46.500 Power On Hours: 0 hours 00:14:46.500 Unsafe Shutdowns: 0 00:14:46.500 Unrecoverable Media Errors: 0 00:14:46.500 Lifetime Error Log Entries: 0 00:14:46.500 Warning Temperature Time: 0 minutes 00:14:46.500 Critical Temperature Time: 0 minutes 00:14:46.500 00:14:46.500 Number of Queues 00:14:46.500 ================ 00:14:46.500 Number of I/O Submission Queues: 127 00:14:46.500 Number of I/O Completion Queues: 127 00:14:46.500 00:14:46.500 Active Namespaces 00:14:46.500 ================= 00:14:46.500 Namespace ID:1 00:14:46.500 Error Recovery Timeout: Unlimited 00:14:46.500 Command Set Identifier: NVM (00h) 00:14:46.500 Deallocate: Supported 00:14:46.500 Deallocated/Unwritten Error: Not Supported 00:14:46.500 Deallocated Read Value: Unknown 00:14:46.500 Deallocate in Write Zeroes: Not Supported 00:14:46.500 Deallocated Guard Field: 0xFFFF 00:14:46.501 Flush: Supported 00:14:46.501 Reservation: Supported 00:14:46.501 Namespace Sharing Capabilities: Multiple Controllers 00:14:46.501 Size (in LBAs): 131072 (0GiB) 00:14:46.501 Capacity (in LBAs): 131072 (0GiB) 00:14:46.501 Utilization (in LBAs): 131072 (0GiB) 00:14:46.501 NGUID: F1EA0103DC3947F4B1DEF0B464945694 00:14:46.501 UUID: f1ea0103-dc39-47f4-b1de-f0b464945694 00:14:46.501 Thin Provisioning: Not Supported 00:14:46.501 Per-NS Atomic Units: Yes 00:14:46.501 Atomic Boundary Size (Normal): 0 00:14:46.501 Atomic Boundary Size (PFail): 0 00:14:46.501 Atomic Boundary Offset: 0 00:14:46.501 Maximum Single Source Range Length: 65535 
00:14:46.501 Maximum Copy Length: 65535 00:14:46.501 Maximum Source Range Count: 1 00:14:46.501 NGUID/EUI64 Never Reused: No 00:14:46.501 Namespace Write Protected: No 00:14:46.501 Number of LBA Formats: 1 00:14:46.501 Current LBA Format: LBA Format #00 00:14:46.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:46.501 00:14:46.501 11:50:36 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:46.501 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.760 [2024-04-18 11:50:37.177926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.031 [2024-04-18 11:50:42.286479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.031 Initializing NVMe Controllers 00:14:52.032 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:52.032 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:52.032 Initialization complete. Launching workers. 00:14:52.032 ======================================================== 00:14:52.032 Latency(us) 00:14:52.032 Device Information : IOPS MiB/s Average min max 00:14:52.032 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.99 156.06 3203.47 1038.96 7260.42 00:14:52.032 ======================================================== 00:14:52.032 Total : 39950.99 156.06 3203.47 1038.96 7260.42 00:14:52.032 00:14:52.032 11:50:42 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:52.032 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.290 [2024-04-18 11:50:42.607764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.561 [2024-04-18 11:50:47.630691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.561 Initializing NVMe Controllers 00:14:57.561 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:57.561 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:57.561 Initialization complete. Launching workers. 
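The spdk_nvme_perf invocations here use 4096-byte I/O at queue depth 128 on a single core (-o 4096 -q 128 -c 0x2), so the IOPS, MiB/s and average-latency columns of the read run above should agree with each other; the write run whose results follow obeys the same arithmetic. A quick sanity check with the numbers as printed (the latency columns are in microseconds):

  awk 'BEGIN { printf "%.2f MiB/s\n", 39950.99 * 4096 / 1048576 }'   # IOPS x 4096 B -> 156.06 MiB/s
  awk 'BEGIN { printf "%.0f IOPS\n", 128 / (3203.47 / 1e6) }'        # Little's law: queue depth / avg latency -> ~39957 IOPS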
00:14:57.561 ======================================================== 00:14:57.561 Latency(us) 00:14:57.561 Device Information : IOPS MiB/s Average min max 00:14:57.561 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39905.47 155.88 3206.77 1053.94 6916.53 00:14:57.561 ======================================================== 00:14:57.561 Total : 39905.47 155.88 3206.77 1053.94 6916.53 00:14:57.561 00:14:57.561 11:50:47 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.561 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.561 [2024-04-18 11:50:47.994747] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:02.896 [2024-04-18 11:50:53.163567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:02.896 Initializing NVMe Controllers 00:15:02.896 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:02.896 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:02.896 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:02.896 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:02.896 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:02.896 Initialization complete. Launching workers. 00:15:02.896 Starting thread on core 2 00:15:02.896 Starting thread on core 3 00:15:02.896 Starting thread on core 1 00:15:02.896 11:50:53 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:02.896 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.155 [2024-04-18 11:50:53.618049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.446 [2024-04-18 11:50:56.767915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.446 Initializing NVMe Controllers 00:15:06.446 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.446 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.446 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:06.446 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:06.446 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:06.446 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:06.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:06.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:06.446 Initialization complete. Launching workers. 
00:15:06.446 Starting thread on core 1 with urgent priority queue 00:15:06.446 Starting thread on core 2 with urgent priority queue 00:15:06.446 Starting thread on core 3 with urgent priority queue 00:15:06.446 Starting thread on core 0 with urgent priority queue 00:15:06.446 SPDK bdev Controller (SPDK2 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:15:06.446 SPDK bdev Controller (SPDK2 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:15:06.446 SPDK bdev Controller (SPDK2 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:15:06.446 SPDK bdev Controller (SPDK2 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:15:06.446 ======================================================== 00:15:06.446 00:15:06.446 11:50:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:06.705 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.705 [2024-04-18 11:50:57.235083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.705 [2024-04-18 11:50:57.247568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.964 Initializing NVMe Controllers 00:15:06.964 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.964 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.964 Namespace ID: 1 size: 0GB 00:15:06.964 Initialization complete. 00:15:06.964 INFO: using host memory buffer for IO 00:15:06.964 Hello world! 00:15:06.964 11:50:57 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:06.964 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.223 [2024-04-18 11:50:57.678852] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.603 Initializing NVMe Controllers 00:15:08.603 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.603 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.603 Initialization complete. Launching workers. 
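In the arbitration results above, the secs/100000 ios column is simply 100000 divided by the per-core IO/s figure (the run was started with -n 100000, -q 64, randrw 50/50 for 3 seconds, as echoed in its configuration line). For example:

  awk 'BEGIN { printf "%.2f\n", 100000 / 512.00 }'    # core 0: 195.31 secs/100000 ios
  awk 'BEGIN { printf "%.2f\n", 100000 / 533.33 }'    # core 3: 187.50 secs/100000 ios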
00:15:08.603 submit (in ns) avg, min, max = 6722.8, 3282.4, 4006684.0 00:15:08.603 complete (in ns) avg, min, max = 20667.5, 1833.6, 4009665.6 00:15:08.603 00:15:08.603 Submit histogram 00:15:08.603 ================ 00:15:08.603 Range in us Cumulative Count 00:15:08.603 3.277 - 3.302: 0.0260% ( 4) 00:15:08.603 3.302 - 3.328: 0.0521% ( 4) 00:15:08.603 3.328 - 3.354: 0.1757% ( 19) 00:15:08.603 3.354 - 3.379: 1.0154% ( 129) 00:15:08.603 3.379 - 3.405: 3.6970% ( 412) 00:15:08.603 3.405 - 3.430: 9.7956% ( 937) 00:15:08.603 3.430 - 3.456: 19.0706% ( 1425) 00:15:08.603 3.456 - 3.482: 28.9052% ( 1511) 00:15:08.603 3.482 - 3.507: 40.2304% ( 1740) 00:15:08.603 3.507 - 3.533: 50.3970% ( 1562) 00:15:08.603 3.533 - 3.558: 56.9123% ( 1001) 00:15:08.603 3.558 - 3.584: 62.6855% ( 887) 00:15:08.603 3.584 - 3.610: 68.3481% ( 870) 00:15:08.603 3.610 - 3.635: 74.8763% ( 1003) 00:15:08.603 3.635 - 3.661: 80.6951% ( 894) 00:15:08.603 3.661 - 3.686: 83.5264% ( 435) 00:15:08.603 3.686 - 3.712: 84.8867% ( 209) 00:15:08.603 3.712 - 3.738: 86.3577% ( 226) 00:15:08.603 3.738 - 3.763: 88.0240% ( 256) 00:15:08.603 3.763 - 3.789: 89.9050% ( 289) 00:15:08.603 3.789 - 3.814: 91.8836% ( 304) 00:15:08.603 3.814 - 3.840: 93.2765% ( 214) 00:15:08.603 3.840 - 3.866: 94.2463% ( 149) 00:15:08.603 3.866 - 3.891: 95.4829% ( 190) 00:15:08.603 3.891 - 3.917: 96.4918% ( 155) 00:15:08.603 3.917 - 3.942: 97.1817% ( 106) 00:15:08.603 3.942 - 3.968: 97.6308% ( 69) 00:15:08.603 3.968 - 3.994: 97.9563% ( 50) 00:15:08.603 3.994 - 4.019: 98.2036% ( 38) 00:15:08.603 4.019 - 4.045: 98.3403% ( 21) 00:15:08.603 4.045 - 4.070: 98.4314% ( 14) 00:15:08.603 4.070 - 4.096: 98.4835% ( 8) 00:15:08.603 4.096 - 4.122: 98.5030% ( 3) 00:15:08.603 4.122 - 4.147: 98.5095% ( 1) 00:15:08.603 4.147 - 4.173: 98.5355% ( 4) 00:15:08.603 4.173 - 4.198: 98.5551% ( 3) 00:15:08.603 4.224 - 4.250: 98.5616% ( 1) 00:15:08.603 4.326 - 4.352: 98.5681% ( 1) 00:15:08.603 4.352 - 4.378: 98.5746% ( 1) 00:15:08.603 4.378 - 4.403: 98.5811% ( 1) 00:15:08.603 4.403 - 4.429: 98.6006% ( 3) 00:15:08.603 4.429 - 4.454: 98.6136% ( 2) 00:15:08.603 4.454 - 4.480: 98.6332% ( 3) 00:15:08.603 4.480 - 4.506: 98.6983% ( 10) 00:15:08.603 4.506 - 4.531: 98.7568% ( 9) 00:15:08.603 4.531 - 4.557: 98.7959% ( 6) 00:15:08.603 4.557 - 4.582: 98.8414% ( 7) 00:15:08.603 4.582 - 4.608: 98.9065% ( 10) 00:15:08.603 4.608 - 4.634: 98.9651% ( 9) 00:15:08.603 4.634 - 4.659: 98.9781% ( 2) 00:15:08.603 4.659 - 4.685: 99.0367% ( 9) 00:15:08.603 4.685 - 4.710: 99.0497% ( 2) 00:15:08.603 4.710 - 4.736: 99.0888% ( 6) 00:15:08.603 4.736 - 4.762: 99.1213% ( 5) 00:15:08.603 4.762 - 4.787: 99.1408% ( 3) 00:15:08.603 4.787 - 4.813: 99.1799% ( 6) 00:15:08.603 4.838 - 4.864: 99.1929% ( 2) 00:15:08.603 4.864 - 4.890: 99.2059% ( 2) 00:15:08.603 4.890 - 4.915: 99.2255% ( 3) 00:15:08.603 4.915 - 4.941: 99.2320% ( 1) 00:15:08.603 4.941 - 4.966: 99.2450% ( 2) 00:15:08.603 4.966 - 4.992: 99.2710% ( 4) 00:15:08.603 4.992 - 5.018: 99.2840% ( 2) 00:15:08.603 5.018 - 5.043: 99.3101% ( 4) 00:15:08.603 5.069 - 5.094: 99.3166% ( 1) 00:15:08.603 5.094 - 5.120: 99.3231% ( 1) 00:15:08.603 5.146 - 5.171: 99.3296% ( 1) 00:15:08.603 5.299 - 5.325: 99.3361% ( 1) 00:15:08.603 5.478 - 5.504: 99.3426% ( 1) 00:15:08.603 5.581 - 5.606: 99.3491% ( 1) 00:15:08.603 5.632 - 5.658: 99.3556% ( 1) 00:15:08.603 5.734 - 5.760: 99.3687% ( 2) 00:15:08.603 5.760 - 5.786: 99.3817% ( 2) 00:15:08.603 5.786 - 5.811: 99.3882% ( 1) 00:15:08.603 5.811 - 5.837: 99.3947% ( 1) 00:15:08.603 5.837 - 5.862: 99.4012% ( 1) 00:15:08.603 5.862 - 5.888: 99.4337% ( 5) 
00:15:08.603 5.939 - 5.965: 99.4402% ( 1) 00:15:08.603 5.965 - 5.990: 99.4598% ( 3) 00:15:08.603 6.016 - 6.042: 99.4663% ( 1) 00:15:08.603 6.042 - 6.067: 99.4728% ( 1) 00:15:08.603 6.067 - 6.093: 99.4858% ( 2) 00:15:08.603 6.093 - 6.118: 99.4923% ( 1) 00:15:08.603 6.118 - 6.144: 99.4988% ( 1) 00:15:08.603 6.170 - 6.195: 99.5184% ( 3) 00:15:08.603 6.195 - 6.221: 99.5249% ( 1) 00:15:08.603 6.246 - 6.272: 99.5314% ( 1) 00:15:08.603 6.323 - 6.349: 99.5379% ( 1) 00:15:08.603 6.374 - 6.400: 99.5444% ( 1) 00:15:08.603 6.400 - 6.426: 99.5574% ( 2) 00:15:08.603 6.605 - 6.656: 99.5639% ( 1) 00:15:08.603 6.810 - 6.861: 99.5704% ( 1) 00:15:08.603 6.861 - 6.912: 99.5769% ( 1) 00:15:08.603 6.912 - 6.963: 99.5834% ( 1) 00:15:08.603 7.117 - 7.168: 99.5900% ( 1) 00:15:08.603 7.168 - 7.219: 99.5965% ( 1) 00:15:08.603 7.219 - 7.270: 99.6030% ( 1) 00:15:08.603 7.270 - 7.322: 99.6095% ( 1) 00:15:08.603 7.322 - 7.373: 99.6160% ( 1) 00:15:08.603 7.373 - 7.424: 99.6225% ( 1) 00:15:08.603 7.424 - 7.475: 99.6290% ( 1) 00:15:08.603 7.475 - 7.526: 99.6355% ( 1) 00:15:08.603 7.526 - 7.578: 99.6420% ( 1) 00:15:08.603 7.578 - 7.629: 99.6485% ( 1) 00:15:08.603 7.629 - 7.680: 99.6681% ( 3) 00:15:08.603 7.680 - 7.731: 99.6746% ( 1) 00:15:08.604 7.731 - 7.782: 99.6876% ( 2) 00:15:08.604 7.782 - 7.834: 99.7006% ( 2) 00:15:08.604 7.936 - 7.987: 99.7136% ( 2) 00:15:08.604 8.090 - 8.141: 99.7201% ( 1) 00:15:08.604 8.141 - 8.192: 99.7266% ( 1) 00:15:08.604 8.243 - 8.294: 99.7462% ( 3) 00:15:08.604 8.346 - 8.397: 99.7527% ( 1) 00:15:08.604 8.397 - 8.448: 99.7592% ( 1) 00:15:08.604 8.550 - 8.602: 99.7722% ( 2) 00:15:08.604 8.755 - 8.806: 99.7787% ( 1) 00:15:08.604 8.858 - 8.909: 99.7852% ( 1) 00:15:08.604 8.909 - 8.960: 99.7917% ( 1) 00:15:08.604 9.011 - 9.062: 99.7982% ( 1) 00:15:08.604 9.062 - 9.114: 99.8178% ( 3) 00:15:08.604 9.114 - 9.165: 99.8243% ( 1) 00:15:08.604 9.165 - 9.216: 99.8308% ( 1) 00:15:08.604 9.216 - 9.267: 99.8373% ( 1) 00:15:08.604 9.318 - 9.370: 99.8438% ( 1) 00:15:08.604 9.421 - 9.472: 99.8503% ( 1) 00:15:08.604 9.472 - 9.523: 99.8568% ( 1) 00:15:08.604 9.523 - 9.574: 99.8633% ( 1) 00:15:08.604 9.626 - 9.677: 99.8698% ( 1) 00:15:08.604 9.779 - 9.830: 99.8763% ( 1) 00:15:08.604 9.882 - 9.933: 99.8828% ( 1) 00:15:08.604 11.725 - 11.776: 99.8894% ( 1) 00:15:08.604 13.619 - 13.722: 99.9024% ( 2) 00:15:08.604 13.722 - 13.824: 99.9089% ( 1) 00:15:08.604 15.974 - 16.077: 99.9154% ( 1) 00:15:08.604 20.173 - 20.275: 99.9219% ( 1) 00:15:08.604 3984.589 - 4010.803: 100.0000% ( 12) 00:15:08.604 00:15:08.604 Complete histogram 00:15:08.604 ================== 00:15:08.604 Range in us Cumulative Count 00:15:08.604 1.830 - 1.843: 0.0521% ( 8) 00:15:08.604 1.843 - 1.856: 0.4231% ( 57) 00:15:08.604 1.856 - 1.869: 0.9047% ( 74) 00:15:08.604 1.869 - 1.882: 1.1781% ( 42) 00:15:08.604 1.882 - 1.894: 5.5259% ( 668) 00:15:08.604 1.894 - 1.907: 36.8133% ( 4807) 00:15:08.604 1.907 - 1.920: 69.2528% ( 4984) 00:15:08.604 1.920 - 1.933: 85.7003% ( 2527) 00:15:08.604 1.933 - 1.946: 93.6149% ( 1216) 00:15:08.604 1.946 - 1.958: 96.5699% ( 454) 00:15:08.604 1.958 - 1.971: 97.7285% ( 178) 00:15:08.604 1.971 - 1.984: 98.3142% ( 90) 00:15:08.604 1.984 - 1.997: 98.5876% ( 42) 00:15:08.604 1.997 - 2.010: 98.7113% ( 19) 00:15:08.604 2.010 - 2.022: 98.7894% ( 12) 00:15:08.604 2.022 - 2.035: 98.8089% ( 3) 00:15:08.604 2.035 - 2.048: 98.8219% ( 2) 00:15:08.604 2.048 - 2.061: 98.8414% ( 3) 00:15:08.604 2.061 - 2.074: 98.8480% ( 1) 00:15:08.604 2.074 - 2.086: 98.8545% ( 1) 00:15:08.604 2.086 - 2.099: 98.8675% ( 2) 00:15:08.604 2.099 - 2.112: 98.8740% 
( 1) 00:15:08.604 2.112 - 2.125: 98.8805% ( 1) 00:15:08.604 2.125 - 2.138: 98.8870% ( 1) 00:15:08.604 2.150 - 2.163: 98.9000% ( 2) 00:15:08.604 2.176 - 2.189: 98.9065% ( 1) 00:15:08.604 2.189 - 2.202: 98.9196% ( 2) 00:15:08.604 2.202 - 2.214: 98.9326% ( 2) 00:15:08.604 2.214 - 2.2[2024-04-18 11:50:58.784359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.604 27: 98.9391% ( 1) 00:15:08.604 2.227 - 2.240: 98.9521% ( 2) 00:15:08.604 2.240 - 2.253: 98.9651% ( 2) 00:15:08.604 2.253 - 2.266: 98.9716% ( 1) 00:15:08.604 2.266 - 2.278: 98.9781% ( 1) 00:15:08.604 2.368 - 2.381: 98.9846% ( 1) 00:15:08.604 2.598 - 2.611: 98.9911% ( 1) 00:15:08.604 2.637 - 2.650: 98.9977% ( 1) 00:15:08.604 2.662 - 2.675: 99.0172% ( 3) 00:15:08.604 2.675 - 2.688: 99.0237% ( 1) 00:15:08.604 2.701 - 2.714: 99.0367% ( 2) 00:15:08.604 2.714 - 2.726: 99.0497% ( 2) 00:15:08.604 2.726 - 2.739: 99.0693% ( 3) 00:15:08.604 2.739 - 2.752: 99.0758% ( 1) 00:15:08.604 2.765 - 2.778: 99.0823% ( 1) 00:15:08.604 2.790 - 2.803: 99.0953% ( 2) 00:15:08.604 2.803 - 2.816: 99.1213% ( 4) 00:15:08.604 2.842 - 2.854: 99.1278% ( 1) 00:15:08.604 2.854 - 2.867: 99.1408% ( 2) 00:15:08.604 2.880 - 2.893: 99.1474% ( 1) 00:15:08.604 2.893 - 2.906: 99.1539% ( 1) 00:15:08.604 2.918 - 2.931: 99.1669% ( 2) 00:15:08.604 2.931 - 2.944: 99.1799% ( 2) 00:15:08.604 2.957 - 2.970: 99.1929% ( 2) 00:15:08.604 2.970 - 2.982: 99.1994% ( 1) 00:15:08.604 2.995 - 3.008: 99.2059% ( 1) 00:15:08.604 3.046 - 3.059: 99.2190% ( 2) 00:15:08.604 3.059 - 3.072: 99.2255% ( 1) 00:15:08.604 3.085 - 3.098: 99.2320% ( 1) 00:15:08.604 3.149 - 3.162: 99.2385% ( 1) 00:15:08.604 3.200 - 3.213: 99.2450% ( 1) 00:15:08.604 3.238 - 3.251: 99.2515% ( 1) 00:15:08.604 4.198 - 4.224: 99.2580% ( 1) 00:15:08.604 4.557 - 4.582: 99.2645% ( 1) 00:15:08.604 4.890 - 4.915: 99.2710% ( 1) 00:15:08.604 5.427 - 5.453: 99.2775% ( 1) 00:15:08.604 5.453 - 5.478: 99.2840% ( 1) 00:15:08.604 5.581 - 5.606: 99.2905% ( 1) 00:15:08.604 5.760 - 5.786: 99.2971% ( 1) 00:15:08.604 5.786 - 5.811: 99.3036% ( 1) 00:15:08.604 5.862 - 5.888: 99.3101% ( 1) 00:15:08.604 5.965 - 5.990: 99.3166% ( 1) 00:15:08.604 5.990 - 6.016: 99.3231% ( 1) 00:15:08.604 6.093 - 6.118: 99.3361% ( 2) 00:15:08.604 6.195 - 6.221: 99.3426% ( 1) 00:15:08.604 6.221 - 6.246: 99.3491% ( 1) 00:15:08.604 6.272 - 6.298: 99.3556% ( 1) 00:15:08.604 6.298 - 6.323: 99.3621% ( 1) 00:15:08.604 6.451 - 6.477: 99.3687% ( 1) 00:15:08.604 6.477 - 6.502: 99.3817% ( 2) 00:15:08.604 6.502 - 6.528: 99.3882% ( 1) 00:15:08.604 6.554 - 6.605: 99.3947% ( 1) 00:15:08.604 6.656 - 6.707: 99.4012% ( 1) 00:15:08.604 6.758 - 6.810: 99.4077% ( 1) 00:15:08.604 6.810 - 6.861: 99.4207% ( 2) 00:15:08.604 7.066 - 7.117: 99.4272% ( 1) 00:15:08.604 7.219 - 7.270: 99.4468% ( 3) 00:15:08.604 7.526 - 7.578: 99.4533% ( 1) 00:15:08.604 7.680 - 7.731: 99.4598% ( 1) 00:15:08.604 7.987 - 8.038: 99.4663% ( 1) 00:15:08.604 8.141 - 8.192: 99.4728% ( 1) 00:15:08.604 8.448 - 8.499: 99.4793% ( 1) 00:15:08.604 8.499 - 8.550: 99.4858% ( 1) 00:15:08.604 8.653 - 8.704: 99.4923% ( 1) 00:15:08.604 8.960 - 9.011: 99.4988% ( 1) 00:15:08.604 11.213 - 11.264: 99.5053% ( 1) 00:15:08.604 12.186 - 12.237: 99.5118% ( 1) 00:15:08.604 13.517 - 13.619: 99.5184% ( 1) 00:15:08.604 15.155 - 15.258: 99.5249% ( 1) 00:15:08.604 17.715 - 17.818: 99.5314% ( 1) 00:15:08.604 3984.589 - 4010.803: 100.0000% ( 72) 00:15:08.604 00:15:08.604 11:50:58 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 
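The overhead tool's summary lines above report submit/complete times in nanoseconds, while the histogram buckets ("Range in us") are in microseconds with cumulative percentages, so the roughly 4.0e6 ns maxima land in the 3984.589 - 4010.803 bucket that closes both histograms:

  awk 'BEGIN { printf "%.1f us\n", 4006684.0 / 1000 }'   # submit max -> 4006.7 us, inside 3984.589 - 4010.803
  awk 'BEGIN { printf "%.1f us\n", 4009665.6 / 1000 }'   # complete max -> 4009.7 us, same bucket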
00:15:08.604 11:50:58 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:08.604 11:50:58 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:08.604 11:50:58 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:08.604 11:50:58 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.604 [ 00:15:08.604 { 00:15:08.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.604 "subtype": "Discovery", 00:15:08.604 "listen_addresses": [], 00:15:08.604 "allow_any_host": true, 00:15:08.604 "hosts": [] 00:15:08.604 }, 00:15:08.604 { 00:15:08.604 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:08.604 "subtype": "NVMe", 00:15:08.604 "listen_addresses": [ 00:15:08.604 { 00:15:08.604 "transport": "VFIOUSER", 00:15:08.604 "trtype": "VFIOUSER", 00:15:08.604 "adrfam": "IPv4", 00:15:08.604 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:08.604 "trsvcid": "0" 00:15:08.604 } 00:15:08.604 ], 00:15:08.604 "allow_any_host": true, 00:15:08.604 "hosts": [], 00:15:08.604 "serial_number": "SPDK1", 00:15:08.604 "model_number": "SPDK bdev Controller", 00:15:08.604 "max_namespaces": 32, 00:15:08.604 "min_cntlid": 1, 00:15:08.604 "max_cntlid": 65519, 00:15:08.604 "namespaces": [ 00:15:08.604 { 00:15:08.604 "nsid": 1, 00:15:08.604 "bdev_name": "Malloc1", 00:15:08.604 "name": "Malloc1", 00:15:08.604 "nguid": "4815050131D3491180DDAFB9267302C8", 00:15:08.604 "uuid": "48150501-31d3-4911-80dd-afb9267302c8" 00:15:08.604 }, 00:15:08.604 { 00:15:08.604 "nsid": 2, 00:15:08.604 "bdev_name": "Malloc3", 00:15:08.604 "name": "Malloc3", 00:15:08.604 "nguid": "59011534B35D4686A5450D0665106BD3", 00:15:08.604 "uuid": "59011534-b35d-4686-a545-0d0665106bd3" 00:15:08.604 } 00:15:08.604 ] 00:15:08.604 }, 00:15:08.604 { 00:15:08.604 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:08.604 "subtype": "NVMe", 00:15:08.604 "listen_addresses": [ 00:15:08.604 { 00:15:08.604 "transport": "VFIOUSER", 00:15:08.604 "trtype": "VFIOUSER", 00:15:08.604 "adrfam": "IPv4", 00:15:08.604 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:08.604 "trsvcid": "0" 00:15:08.604 } 00:15:08.604 ], 00:15:08.604 "allow_any_host": true, 00:15:08.604 "hosts": [], 00:15:08.604 "serial_number": "SPDK2", 00:15:08.604 "model_number": "SPDK bdev Controller", 00:15:08.604 "max_namespaces": 32, 00:15:08.604 "min_cntlid": 1, 00:15:08.605 "max_cntlid": 65519, 00:15:08.605 "namespaces": [ 00:15:08.605 { 00:15:08.605 "nsid": 1, 00:15:08.605 "bdev_name": "Malloc2", 00:15:08.605 "name": "Malloc2", 00:15:08.605 "nguid": "F1EA0103DC3947F4B1DEF0B464945694", 00:15:08.605 "uuid": "f1ea0103-dc39-47f4-b1de-f0b464945694" 00:15:08.605 } 00:15:08.605 ] 00:15:08.605 } 00:15:08.605 ] 00:15:08.605 11:50:59 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:08.605 11:50:59 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:08.605 11:50:59 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2432017 00:15:08.605 11:50:59 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:08.605 11:50:59 -- common/autotest_common.sh@1251 -- # local i=0 00:15:08.605 11:50:59 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:08.605 11:50:59 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:08.605 11:50:59 -- common/autotest_common.sh@1262 -- # return 0 00:15:08.605 11:50:59 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:08.605 11:50:59 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:08.864 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.864 Malloc4 00:15:08.864 11:50:59 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:08.864 [2024-04-18 11:50:59.382303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.122 [2024-04-18 11:50:59.515343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.122 11:50:59 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:09.122 Asynchronous Event Request test 00:15:09.122 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.122 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.122 Registering asynchronous event callbacks... 00:15:09.122 Starting namespace attribute notice tests for all controllers... 00:15:09.122 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:09.122 aer_cb - Changed Namespace 00:15:09.122 Cleaning up... 00:15:09.381 [ 00:15:09.382 { 00:15:09.382 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:09.382 "subtype": "Discovery", 00:15:09.382 "listen_addresses": [], 00:15:09.382 "allow_any_host": true, 00:15:09.382 "hosts": [] 00:15:09.382 }, 00:15:09.382 { 00:15:09.382 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:09.382 "subtype": "NVMe", 00:15:09.382 "listen_addresses": [ 00:15:09.382 { 00:15:09.382 "transport": "VFIOUSER", 00:15:09.382 "trtype": "VFIOUSER", 00:15:09.382 "adrfam": "IPv4", 00:15:09.382 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:09.382 "trsvcid": "0" 00:15:09.382 } 00:15:09.382 ], 00:15:09.382 "allow_any_host": true, 00:15:09.382 "hosts": [], 00:15:09.382 "serial_number": "SPDK1", 00:15:09.382 "model_number": "SPDK bdev Controller", 00:15:09.382 "max_namespaces": 32, 00:15:09.382 "min_cntlid": 1, 00:15:09.382 "max_cntlid": 65519, 00:15:09.382 "namespaces": [ 00:15:09.382 { 00:15:09.382 "nsid": 1, 00:15:09.382 "bdev_name": "Malloc1", 00:15:09.382 "name": "Malloc1", 00:15:09.382 "nguid": "4815050131D3491180DDAFB9267302C8", 00:15:09.382 "uuid": "48150501-31d3-4911-80dd-afb9267302c8" 00:15:09.382 }, 00:15:09.382 { 00:15:09.382 "nsid": 2, 00:15:09.382 "bdev_name": "Malloc3", 00:15:09.382 "name": "Malloc3", 00:15:09.382 "nguid": "59011534B35D4686A5450D0665106BD3", 00:15:09.382 "uuid": "59011534-b35d-4686-a545-0d0665106bd3" 00:15:09.382 } 00:15:09.382 ] 00:15:09.382 }, 00:15:09.382 { 00:15:09.382 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:09.382 "subtype": "NVMe", 00:15:09.382 "listen_addresses": [ 00:15:09.382 { 00:15:09.382 "transport": "VFIOUSER", 00:15:09.382 "trtype": "VFIOUSER", 00:15:09.382 "adrfam": "IPv4", 00:15:09.382 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:09.382 "trsvcid": "0" 00:15:09.382 } 00:15:09.382 ], 00:15:09.382 "allow_any_host": true, 00:15:09.382 "hosts": [], 00:15:09.382 "serial_number": "SPDK2", 00:15:09.382 "model_number": "SPDK bdev Controller", 00:15:09.382 "max_namespaces": 32, 00:15:09.382 "min_cntlid": 1, 
00:15:09.382 "max_cntlid": 65519, 00:15:09.382 "namespaces": [ 00:15:09.382 { 00:15:09.382 "nsid": 1, 00:15:09.382 "bdev_name": "Malloc2", 00:15:09.382 "name": "Malloc2", 00:15:09.382 "nguid": "F1EA0103DC3947F4B1DEF0B464945694", 00:15:09.382 "uuid": "f1ea0103-dc39-47f4-b1de-f0b464945694" 00:15:09.382 }, 00:15:09.382 { 00:15:09.382 "nsid": 2, 00:15:09.382 "bdev_name": "Malloc4", 00:15:09.382 "name": "Malloc4", 00:15:09.382 "nguid": "44A42051189D49939BA7A3BBA6ACB9D0", 00:15:09.382 "uuid": "44a42051-189d-4993-9ba7-a3bba6acb9d0" 00:15:09.382 } 00:15:09.382 ] 00:15:09.382 } 00:15:09.382 ] 00:15:09.382 11:50:59 -- target/nvmf_vfio_user.sh@44 -- # wait 2432017 00:15:09.382 11:50:59 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:09.382 11:50:59 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2423711 00:15:09.382 11:50:59 -- common/autotest_common.sh@936 -- # '[' -z 2423711 ']' 00:15:09.382 11:50:59 -- common/autotest_common.sh@940 -- # kill -0 2423711 00:15:09.382 11:50:59 -- common/autotest_common.sh@941 -- # uname 00:15:09.382 11:50:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.382 11:50:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2423711 00:15:09.382 11:50:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.382 11:50:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.382 11:50:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2423711' 00:15:09.382 killing process with pid 2423711 00:15:09.382 11:50:59 -- common/autotest_common.sh@955 -- # kill 2423711 00:15:09.382 [2024-04-18 11:50:59.768557] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:09.382 11:50:59 -- common/autotest_common.sh@960 -- # wait 2423711 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2432675 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2432675' 00:15:11.288 Process pid: 2432675 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:11.288 11:51:01 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2432675 00:15:11.288 11:51:01 -- common/autotest_common.sh@817 -- # '[' -z 2432675 ']' 00:15:11.288 11:51:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.288 11:51:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.288 11:51:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:11.288 11:51:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.288 11:51:01 -- common/autotest_common.sh@10 -- # set +x 00:15:11.547 [2024-04-18 11:51:01.891517] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:11.547 [2024-04-18 11:51:01.893622] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:11.547 [2024-04-18 11:51:01.893690] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.547 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.547 [2024-04-18 11:51:02.019193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.806 [2024-04-18 11:51:02.234230] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.806 [2024-04-18 11:51:02.234279] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.806 [2024-04-18 11:51:02.234293] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.806 [2024-04-18 11:51:02.234305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.806 [2024-04-18 11:51:02.234317] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.806 [2024-04-18 11:51:02.234403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.806 [2024-04-18 11:51:02.234484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.806 [2024-04-18 11:51:02.234521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.806 [2024-04-18 11:51:02.234532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.065 [2024-04-18 11:51:02.609179] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:15:12.065 [2024-04-18 11:51:02.610322] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:15:12.065 [2024-04-18 11:51:02.611633] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:15:12.065 [2024-04-18 11:51:02.612640] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:12.065 [2024-04-18 11:51:02.612783] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
00:15:12.323 11:51:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.323 11:51:02 -- common/autotest_common.sh@850 -- # return 0 00:15:12.323 11:51:02 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:13.258 11:51:03 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:13.517 11:51:03 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:13.517 11:51:03 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:13.517 11:51:03 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.517 11:51:03 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:13.517 11:51:03 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:13.775 Malloc1 00:15:13.775 11:51:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:13.775 11:51:04 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:14.033 11:51:04 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:14.292 11:51:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.292 11:51:04 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:14.292 11:51:04 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:14.550 Malloc2 00:15:14.551 11:51:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:14.551 11:51:05 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:14.808 11:51:05 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:15.067 11:51:05 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:15.067 11:51:05 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2432675 00:15:15.067 11:51:05 -- common/autotest_common.sh@936 -- # '[' -z 2432675 ']' 00:15:15.067 11:51:05 -- common/autotest_common.sh@940 -- # kill -0 2432675 00:15:15.067 11:51:05 -- common/autotest_common.sh@941 -- # uname 00:15:15.067 11:51:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.067 11:51:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2432675 00:15:15.067 11:51:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.067 11:51:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.067 11:51:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2432675' 00:15:15.067 killing process with pid 2432675 00:15:15.067 11:51:05 -- common/autotest_common.sh@955 -- # kill 2432675 00:15:15.067 11:51:05 -- common/autotest_common.sh@960 -- # wait 2432675 00:15:16.971 11:51:07 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:15:16.971 11:51:07 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:16.971 00:15:16.971 real 0m57.163s 00:15:16.971 user 3m36.828s 00:15:16.971 sys 0m5.654s 00:15:16.971 11:51:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.971 11:51:07 -- common/autotest_common.sh@10 -- # set +x 00:15:16.971 ************************************ 00:15:16.971 END TEST nvmf_vfio_user 00:15:16.971 ************************************ 00:15:16.971 11:51:07 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.971 11:51:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.971 11:51:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.971 11:51:07 -- common/autotest_common.sh@10 -- # set +x 00:15:16.971 ************************************ 00:15:16.971 START TEST nvmf_vfio_user_nvme_compliance 00:15:16.971 ************************************ 00:15:16.971 11:51:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.971 * Looking for test storage... 00:15:16.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:16.971 11:51:07 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.971 11:51:07 -- nvmf/common.sh@7 -- # uname -s 00:15:16.971 11:51:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.971 11:51:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.971 11:51:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.971 11:51:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.971 11:51:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.971 11:51:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.971 11:51:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.971 11:51:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.971 11:51:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.971 11:51:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.971 11:51:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:16.971 11:51:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:16.971 11:51:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.971 11:51:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.971 11:51:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.971 11:51:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.971 11:51:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.971 11:51:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.971 11:51:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.971 11:51:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.971 11:51:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.971 11:51:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.971 11:51:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.971 11:51:07 -- paths/export.sh@5 -- # export PATH 00:15:16.971 11:51:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.971 11:51:07 -- nvmf/common.sh@47 -- # : 0 00:15:16.971 11:51:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.971 11:51:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.971 11:51:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.971 11:51:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.971 11:51:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.971 11:51:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.971 11:51:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.971 11:51:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.971 11:51:07 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.971 11:51:07 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.971 11:51:07 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.971 11:51:07 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.971 11:51:07 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:16.971 11:51:07 -- compliance/compliance.sh@20 -- # nvmfpid=2434007 00:15:16.971 11:51:07 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2434007' 00:15:16.971 Process pid: 2434007 00:15:16.971 11:51:07 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.971 11:51:07 -- compliance/compliance.sh@24 -- # waitforlisten 2434007 00:15:16.971 11:51:07 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:16.971 11:51:07 -- common/autotest_common.sh@817 -- # '[' -z 2434007 ']' 00:15:16.971 11:51:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.971 11:51:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:16.971 11:51:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.971 11:51:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:16.971 11:51:07 -- common/autotest_common.sh@10 -- # set +x 00:15:16.971 [2024-04-18 11:51:07.517114] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:16.971 [2024-04-18 11:51:07.517207] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.230 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.230 [2024-04-18 11:51:07.641436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.488 [2024-04-18 11:51:07.847902] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.488 [2024-04-18 11:51:07.847947] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.488 [2024-04-18 11:51:07.847960] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.488 [2024-04-18 11:51:07.847988] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.488 [2024-04-18 11:51:07.848000] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:17.488 [2024-04-18 11:51:07.848076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.488 [2024-04-18 11:51:07.848182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.488 [2024-04-18 11:51:07.848186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.745 11:51:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.745 11:51:08 -- common/autotest_common.sh@850 -- # return 0 00:15:17.745 11:51:08 -- compliance/compliance.sh@26 -- # sleep 1 00:15:19.130 11:51:09 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:19.130 11:51:09 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:19.130 11:51:09 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:19.130 11:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.130 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 11:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.130 11:51:09 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:19.130 11:51:09 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:19.130 11:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.130 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 malloc0 00:15:19.130 11:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.130 11:51:09 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:19.130 11:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.130 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 11:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.130 11:51:09 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:19.130 11:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.130 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 11:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.130 11:51:09 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:19.130 11:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.130 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:15:19.130 11:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.130 11:51:09 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:19.130 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.130 00:15:19.130 00:15:19.130 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.130 http://cunit.sourceforge.net/ 00:15:19.130 00:15:19.130 00:15:19.130 Suite: nvme_compliance 00:15:19.387 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-18 11:51:09.688539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.387 [2024-04-18 11:51:09.690016] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:19.387 [2024-04-18 11:51:09.690039] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:19.387 [2024-04-18 11:51:09.690055] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:19.387 
[2024-04-18 11:51:09.693589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.387 passed 00:15:19.387 Test: admin_identify_ctrlr_verify_fused ...[2024-04-18 11:51:09.795398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.387 [2024-04-18 11:51:09.798428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.387 passed 00:15:19.388 Test: admin_identify_ns ...[2024-04-18 11:51:09.900646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.645 [2024-04-18 11:51:09.962465] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:19.645 [2024-04-18 11:51:09.970469] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:19.645 [2024-04-18 11:51:09.991585] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.645 passed 00:15:19.645 Test: admin_get_features_mandatory_features ...[2024-04-18 11:51:10.095253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.645 [2024-04-18 11:51:10.098254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.645 passed 00:15:19.903 Test: admin_get_features_optional_features ...[2024-04-18 11:51:10.205096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.903 [2024-04-18 11:51:10.208125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.903 passed 00:15:19.903 Test: admin_set_features_number_of_queues ...[2024-04-18 11:51:10.310326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.903 [2024-04-18 11:51:10.419519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.160 passed 00:15:20.160 Test: admin_get_log_page_mandatory_logs ...[2024-04-18 11:51:10.521534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.160 [2024-04-18 11:51:10.524564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.160 passed 00:15:20.161 Test: admin_get_log_page_with_lpo ...[2024-04-18 11:51:10.628855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.161 [2024-04-18 11:51:10.698486] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:20.418 [2024-04-18 11:51:10.711566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.418 passed 00:15:20.418 Test: fabric_property_get ...[2024-04-18 11:51:10.814523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.418 [2024-04-18 11:51:10.815866] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:20.418 [2024-04-18 11:51:10.817545] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.418 passed 00:15:20.418 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-18 11:51:10.922336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.418 [2024-04-18 11:51:10.923684] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:20.418 [2024-04-18 11:51:10.928384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:15:20.676 passed 00:15:20.676 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-18 11:51:11.030669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.676 [2024-04-18 11:51:11.116466] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.676 [2024-04-18 11:51:11.132468] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.676 [2024-04-18 11:51:11.138231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.676 passed 00:15:20.934 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-18 11:51:11.239371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.934 [2024-04-18 11:51:11.240712] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:20.934 [2024-04-18 11:51:11.242403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.934 passed 00:15:20.934 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-18 11:51:11.342404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.934 [2024-04-18 11:51:11.417470] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:20.934 [2024-04-18 11:51:11.438632] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.934 [2024-04-18 11:51:11.444212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.192 passed 00:15:21.192 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-18 11:51:11.546823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.192 [2024-04-18 11:51:11.548174] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:21.192 [2024-04-18 11:51:11.548212] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:21.192 [2024-04-18 11:51:11.549851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.192 passed 00:15:21.192 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-18 11:51:11.652930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.450 [2024-04-18 11:51:11.746462] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:21.450 [2024-04-18 11:51:11.754460] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:21.450 [2024-04-18 11:51:11.762461] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:21.450 [2024-04-18 11:51:11.770459] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:21.450 [2024-04-18 11:51:11.800331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.450 passed 00:15:21.450 Test: admin_create_io_sq_verify_pc ...[2024-04-18 11:51:11.903389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.450 [2024-04-18 11:51:11.919495] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:21.450 [2024-04-18 11:51:11.937467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.450 passed 00:15:21.707 Test: admin_create_io_qp_max_qps ...[2024-04-18 11:51:12.040254] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.648 [2024-04-18 11:51:13.151468] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:23.213 [2024-04-18 11:51:13.589043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.213 passed 00:15:23.213 Test: admin_create_io_sq_shared_cq ...[2024-04-18 11:51:13.690262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.481 [2024-04-18 11:51:13.823466] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:23.481 [2024-04-18 11:51:13.859686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.481 passed 00:15:23.481 00:15:23.481 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.481 suites 1 1 n/a 0 0 00:15:23.481 tests 18 18 18 0 0 00:15:23.481 asserts 360 360 360 0 n/a 00:15:23.481 00:15:23.481 Elapsed time = 1.777 seconds 00:15:23.481 11:51:13 -- compliance/compliance.sh@42 -- # killprocess 2434007 00:15:23.481 11:51:13 -- common/autotest_common.sh@936 -- # '[' -z 2434007 ']' 00:15:23.481 11:51:13 -- common/autotest_common.sh@940 -- # kill -0 2434007 00:15:23.481 11:51:13 -- common/autotest_common.sh@941 -- # uname 00:15:23.481 11:51:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.481 11:51:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2434007 00:15:23.481 11:51:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.481 11:51:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.481 11:51:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2434007' 00:15:23.481 killing process with pid 2434007 00:15:23.481 11:51:14 -- common/autotest_common.sh@955 -- # kill 2434007 00:15:23.481 11:51:14 -- common/autotest_common.sh@960 -- # wait 2434007 00:15:25.377 11:51:15 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:25.377 11:51:15 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:25.377 00:15:25.377 real 0m8.179s 00:15:25.377 user 0m21.862s 00:15:25.377 sys 0m0.892s 00:15:25.377 11:51:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:25.377 11:51:15 -- common/autotest_common.sh@10 -- # set +x 00:15:25.377 ************************************ 00:15:25.377 END TEST nvmf_vfio_user_nvme_compliance 00:15:25.377 ************************************ 00:15:25.377 11:51:15 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:25.377 11:51:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.377 11:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.377 11:51:15 -- common/autotest_common.sh@10 -- # set +x 00:15:25.377 ************************************ 00:15:25.377 START TEST nvmf_vfio_user_fuzz 00:15:25.377 ************************************ 00:15:25.377 11:51:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:25.377 * Looking for test storage... 
00:15:25.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.377 11:51:15 -- nvmf/common.sh@7 -- # uname -s 00:15:25.377 11:51:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.377 11:51:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.377 11:51:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.377 11:51:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.377 11:51:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.377 11:51:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.377 11:51:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.377 11:51:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.377 11:51:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.377 11:51:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.377 11:51:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:25.377 11:51:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:25.377 11:51:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.377 11:51:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.377 11:51:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.377 11:51:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.377 11:51:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.377 11:51:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.377 11:51:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.377 11:51:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.377 11:51:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.377 11:51:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.377 11:51:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.377 11:51:15 -- paths/export.sh@5 -- # export PATH 00:15:25.377 11:51:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.377 11:51:15 -- nvmf/common.sh@47 -- # : 0 00:15:25.377 11:51:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.377 11:51:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.377 11:51:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.377 11:51:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.377 11:51:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.377 11:51:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.377 11:51:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.377 11:51:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2435613 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2435613' 00:15:25.377 Process pid: 2435613 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.377 11:51:15 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2435613 00:15:25.377 11:51:15 -- common/autotest_common.sh@817 -- # '[' -z 2435613 ']' 00:15:25.377 11:51:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.377 11:51:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.377 11:51:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:25.377 11:51:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.377 11:51:15 -- common/autotest_common.sh@10 -- # set +x 00:15:26.309 11:51:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.309 11:51:16 -- common/autotest_common.sh@850 -- # return 0 00:15:26.309 11:51:16 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:27.243 11:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.243 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:15:27.243 11:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:27.243 11:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.243 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:15:27.243 malloc0 00:15:27.243 11:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:27.243 11:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.243 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:15:27.243 11:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:27.243 11:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.243 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:15:27.243 11:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:27.243 11:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.243 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:15:27.243 11:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:27.243 11:51:17 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:59.308 Fuzzing completed. 
Shutting down the fuzz application 00:15:59.308 00:15:59.308 Dumping successful admin opcodes: 00:15:59.308 8, 9, 10, 24, 00:15:59.308 Dumping successful io opcodes: 00:15:59.308 0, 00:15:59.308 NS: 0x200003a1eec0 I/O qp, Total commands completed: 723483, total successful commands: 2818, random_seed: 3756673792 00:15:59.308 NS: 0x200003a1eec0 admin qp, Total commands completed: 167478, total successful commands: 1363, random_seed: 840719680 00:15:59.308 11:51:48 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:59.308 11:51:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:59.308 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:15:59.308 11:51:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:59.309 11:51:48 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2435613 00:15:59.309 11:51:48 -- common/autotest_common.sh@936 -- # '[' -z 2435613 ']' 00:15:59.309 11:51:48 -- common/autotest_common.sh@940 -- # kill -0 2435613 00:15:59.309 11:51:48 -- common/autotest_common.sh@941 -- # uname 00:15:59.309 11:51:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.309 11:51:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2435613 00:15:59.309 11:51:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.309 11:51:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.309 11:51:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2435613' 00:15:59.309 killing process with pid 2435613 00:15:59.309 11:51:48 -- common/autotest_common.sh@955 -- # kill 2435613 00:15:59.309 11:51:48 -- common/autotest_common.sh@960 -- # wait 2435613 00:15:59.874 11:51:50 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:59.874 11:51:50 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:59.874 00:15:59.874 real 0m34.769s 00:15:59.874 user 0m35.999s 00:15:59.874 sys 0m28.007s 00:15:59.874 11:51:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:59.874 11:51:50 -- common/autotest_common.sh@10 -- # set +x 00:15:59.874 ************************************ 00:15:59.874 END TEST nvmf_vfio_user_fuzz 00:15:59.874 ************************************ 00:16:00.191 11:51:50 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:00.191 11:51:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:00.191 11:51:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.191 11:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.191 ************************************ 00:16:00.191 START TEST nvmf_host_management 00:16:00.191 ************************************ 00:16:00.191 11:51:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:00.191 * Looking for test storage... 
00:16:00.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.191 11:51:50 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.191 11:51:50 -- nvmf/common.sh@7 -- # uname -s 00:16:00.191 11:51:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.191 11:51:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.191 11:51:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.191 11:51:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.191 11:51:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.191 11:51:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.191 11:51:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.191 11:51:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.191 11:51:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.191 11:51:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.191 11:51:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:00.191 11:51:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:00.191 11:51:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.191 11:51:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.191 11:51:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.191 11:51:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.191 11:51:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.450 11:51:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.450 11:51:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.450 11:51:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.450 11:51:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.450 11:51:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.450 11:51:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.450 11:51:50 -- paths/export.sh@5 -- # export PATH 00:16:00.450 11:51:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.450 11:51:50 -- nvmf/common.sh@47 -- # : 0 00:16:00.450 11:51:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.450 11:51:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.450 11:51:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.450 11:51:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.450 11:51:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.450 11:51:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.450 11:51:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.450 11:51:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.450 11:51:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:00.450 11:51:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.450 11:51:50 -- target/host_management.sh@105 -- # nvmftestinit 00:16:00.450 11:51:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:00.450 11:51:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.450 11:51:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:00.450 11:51:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:00.450 11:51:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:00.450 11:51:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.450 11:51:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.450 11:51:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.450 11:51:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:00.450 11:51:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:00.450 11:51:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.450 11:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.009 11:51:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:07.009 11:51:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.009 11:51:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.009 11:51:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.009 11:51:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.009 11:51:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.009 11:51:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.009 11:51:57 -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.009 11:51:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.009 
11:51:57 -- nvmf/common.sh@296 -- # e810=() 00:16:07.009 11:51:57 -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.009 11:51:57 -- nvmf/common.sh@297 -- # x722=() 00:16:07.009 11:51:57 -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.009 11:51:57 -- nvmf/common.sh@298 -- # mlx=() 00:16:07.009 11:51:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.009 11:51:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.009 11:51:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.010 11:51:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.010 11:51:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.010 11:51:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.010 11:51:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.010 11:51:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.010 11:51:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.010 11:51:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:07.010 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:07.010 11:51:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.010 11:51:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:07.010 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:07.010 11:51:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.010 11:51:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.010 11:51:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.010 11:51:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:07.010 11:51:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.010 11:51:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:16:07.010 Found net devices under 0000:af:00.0: cvl_0_0 00:16:07.010 11:51:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.010 11:51:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.010 11:51:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.010 11:51:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:07.010 11:51:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.010 11:51:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:07.010 Found net devices under 0000:af:00.1: cvl_0_1 00:16:07.010 11:51:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.010 11:51:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:07.010 11:51:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:07.010 11:51:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:07.010 11:51:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.010 11:51:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.010 11:51:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.010 11:51:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.010 11:51:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.010 11:51:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.010 11:51:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.010 11:51:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.010 11:51:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.010 11:51:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.010 11:51:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.010 11:51:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.010 11:51:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.010 11:51:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.010 11:51:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.010 11:51:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.010 11:51:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.010 11:51:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.010 11:51:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.010 11:51:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:16:07.010 00:16:07.010 --- 10.0.0.2 ping statistics --- 00:16:07.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.010 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:07.010 11:51:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:16:07.010 00:16:07.010 --- 10.0.0.1 ping statistics --- 00:16:07.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.010 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:16:07.010 11:51:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.010 11:51:57 -- nvmf/common.sh@411 -- # return 0 00:16:07.010 11:51:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:07.010 11:51:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.010 11:51:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:07.010 11:51:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.010 11:51:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:07.010 11:51:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:07.010 11:51:57 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:16:07.010 11:51:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:07.010 11:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.010 11:51:57 -- common/autotest_common.sh@10 -- # set +x 00:16:07.269 ************************************ 00:16:07.269 START TEST nvmf_host_management 00:16:07.269 ************************************ 00:16:07.269 11:51:57 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:16:07.269 11:51:57 -- target/host_management.sh@69 -- # starttarget 00:16:07.269 11:51:57 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:07.269 11:51:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:07.269 11:51:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:07.269 11:51:57 -- common/autotest_common.sh@10 -- # set +x 00:16:07.269 11:51:57 -- nvmf/common.sh@470 -- # nvmfpid=2444747 00:16:07.269 11:51:57 -- nvmf/common.sh@471 -- # waitforlisten 2444747 00:16:07.269 11:51:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:07.269 11:51:57 -- common/autotest_common.sh@817 -- # '[' -z 2444747 ']' 00:16:07.269 11:51:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.269 11:51:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:07.269 11:51:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.269 11:51:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.269 11:51:57 -- common/autotest_common.sh@10 -- # set +x 00:16:07.270 [2024-04-18 11:51:57.711470] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:07.270 [2024-04-18 11:51:57.711556] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.270 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.528 [2024-04-18 11:51:57.838019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.528 [2024-04-18 11:51:58.051959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:07.528 [2024-04-18 11:51:58.052008] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.528 [2024-04-18 11:51:58.052021] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.528 [2024-04-18 11:51:58.052035] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.528 [2024-04-18 11:51:58.052046] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.528 [2024-04-18 11:51:58.052171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.528 [2024-04-18 11:51:58.052195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.528 [2024-04-18 11:51:58.052286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.528 [2024-04-18 11:51:58.052311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:08.094 11:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:08.094 11:51:58 -- common/autotest_common.sh@850 -- # return 0 00:16:08.094 11:51:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:08.094 11:51:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:08.094 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 11:51:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.094 11:51:58 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.094 11:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.094 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 [2024-04-18 11:51:58.520397] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.094 11:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.094 11:51:58 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:08.094 11:51:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:08.094 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 11:51:58 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:08.094 11:51:58 -- target/host_management.sh@23 -- # cat 00:16:08.094 11:51:58 -- target/host_management.sh@30 -- # rpc_cmd 00:16:08.094 11:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.094 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 Malloc0 00:16:08.352 [2024-04-18 11:51:58.656648] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.352 11:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.352 11:51:58 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:08.352 11:51:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:08.352 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.352 11:51:58 -- target/host_management.sh@73 -- # perfpid=2444947 00:16:08.352 11:51:58 -- target/host_management.sh@74 -- # waitforlisten 2444947 /var/tmp/bdevperf.sock 00:16:08.352 11:51:58 -- common/autotest_common.sh@817 -- # '[' -z 2444947 ']' 00:16:08.352 11:51:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.352 11:51:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:08.352 11:51:58 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:08.352 11:51:58 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:08.352 11:51:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.352 11:51:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:08.352 11:51:58 -- nvmf/common.sh@521 -- # config=() 00:16:08.352 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.352 11:51:58 -- nvmf/common.sh@521 -- # local subsystem config 00:16:08.352 11:51:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:08.352 11:51:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:08.352 { 00:16:08.352 "params": { 00:16:08.352 "name": "Nvme$subsystem", 00:16:08.353 "trtype": "$TEST_TRANSPORT", 00:16:08.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:08.353 "adrfam": "ipv4", 00:16:08.353 "trsvcid": "$NVMF_PORT", 00:16:08.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:08.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:08.353 "hdgst": ${hdgst:-false}, 00:16:08.353 "ddgst": ${ddgst:-false} 00:16:08.353 }, 00:16:08.353 "method": "bdev_nvme_attach_controller" 00:16:08.353 } 00:16:08.353 EOF 00:16:08.353 )") 00:16:08.353 11:51:58 -- nvmf/common.sh@543 -- # cat 00:16:08.353 11:51:58 -- nvmf/common.sh@545 -- # jq . 00:16:08.353 11:51:58 -- nvmf/common.sh@546 -- # IFS=, 00:16:08.353 11:51:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:08.353 "params": { 00:16:08.353 "name": "Nvme0", 00:16:08.353 "trtype": "tcp", 00:16:08.353 "traddr": "10.0.0.2", 00:16:08.353 "adrfam": "ipv4", 00:16:08.353 "trsvcid": "4420", 00:16:08.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:08.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:08.353 "hdgst": false, 00:16:08.353 "ddgst": false 00:16:08.353 }, 00:16:08.353 "method": "bdev_nvme_attach_controller" 00:16:08.353 }' 00:16:08.353 [2024-04-18 11:51:58.792378] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:08.353 [2024-04-18 11:51:58.792472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444947 ] 00:16:08.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.611 [2024-04-18 11:51:58.914290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.611 [2024-04-18 11:51:59.151876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.178 Running I/O for 10 seconds... 
00:16:09.178 11:51:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:09.178 11:51:59 -- common/autotest_common.sh@850 -- # return 0 00:16:09.178 11:51:59 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:09.178 11:51:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.178 11:51:59 -- common/autotest_common.sh@10 -- # set +x 00:16:09.178 11:51:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.178 11:51:59 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:09.178 11:51:59 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:09.178 11:51:59 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:09.178 11:51:59 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:09.178 11:51:59 -- target/host_management.sh@52 -- # local ret=1 00:16:09.178 11:51:59 -- target/host_management.sh@53 -- # local i 00:16:09.178 11:51:59 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:09.178 11:51:59 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:09.178 11:51:59 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:09.178 11:51:59 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:09.178 11:51:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.178 11:51:59 -- common/autotest_common.sh@10 -- # set +x 00:16:09.436 11:51:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.436 11:51:59 -- target/host_management.sh@55 -- # read_io_count=65 00:16:09.436 11:51:59 -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:16:09.436 11:51:59 -- target/host_management.sh@62 -- # sleep 0.25 00:16:09.696 11:52:00 -- target/host_management.sh@54 -- # (( i-- )) 00:16:09.696 11:52:00 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:09.696 11:52:00 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:09.696 11:52:00 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:09.696 11:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.696 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.696 11:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.696 11:52:00 -- target/host_management.sh@55 -- # read_io_count=451 00:16:09.696 11:52:00 -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:16:09.696 11:52:00 -- target/host_management.sh@59 -- # ret=0 00:16:09.696 11:52:00 -- target/host_management.sh@60 -- # break 00:16:09.696 11:52:00 -- target/host_management.sh@64 -- # return 0 00:16:09.696 11:52:00 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:09.696 11:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.696 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.696 [2024-04-18 11:52:00.037404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.696 [2024-04-18 11:52:00.037454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.696 [2024-04-18 11:52:00.037467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 
00:16:09.696 [2024-04-18 11:52:00.037478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 
[... the same recv-state message for tqpair=0x618000002480 repeats verbatim ...]
00:16:09.697 [2024-04-18 11:52:00.037912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 
00:16:09.697 [2024-04-18 11:52:00.037922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037983] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.037994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038055] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038099] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:09.697 [2024-04-18 11:52:00.038242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 
[2024-04-18 11:52:00.038700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.697 [2024-04-18 11:52:00.038928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.697 [2024-04-18 11:52:00.038944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.038961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.038978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.038996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 
11:52:00.039048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039385] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.039979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.039993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.040018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.040043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.040068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.040092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.040117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.698 [2024-04-18 11:52:00.040142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.698 [2024-04-18 11:52:00.040153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.699 [2024-04-18 11:52:00.040179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.699 [2024-04-18 11:52:00.040204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.699 [2024-04-18 11:52:00.040231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.699 [2024-04-18 11:52:00.040256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.699 [2024-04-18 11:52:00.040281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007e40 is same with the state(5) to be set 00:16:09.699 [2024-04-18 11:52:00.040595] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:16:09.699 [2024-04-18 11:52:00.040655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.699 [2024-04-18 11:52:00.040671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.699 [2024-04-18 11:52:00.040697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.699 [2024-04-18 11:52:00.040722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.699 [2024-04-18 11:52:00.040746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.699 [2024-04-18 11:52:00.040758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:16:09.699 [2024-04-18 11:52:00.041736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:09.699 11:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.699 11:52:00 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:09.699 11:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.699 task offset: 57344 on job bdev=Nvme0n1 fails 00:16:09.699 00:16:09.699 Latency(us) 00:16:09.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:09.699 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:09.699 Verification LBA range: start 0x0 length 0x400 
00:16:09.699 Nvme0n1 : 0.39 1151.22 71.95 164.46 0.00 47392.26 7811.89 46137.34 00:16:09.699 =================================================================================================================== 00:16:09.699 Total : 1151.22 71.95 164.46 0.00 47392.26 7811.89 46137.34 00:16:09.699 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.699 [2024-04-18 11:52:00.046475] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:09.699 [2024-04-18 11:52:00.046521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:16:09.699 11:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.699 11:52:00 -- target/host_management.sh@87 -- # sleep 1 00:16:09.699 [2024-04-18 11:52:00.139756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:10.634 11:52:01 -- target/host_management.sh@91 -- # kill -9 2444947 00:16:10.634 11:52:01 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:10.635 11:52:01 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:10.635 11:52:01 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:10.635 11:52:01 -- nvmf/common.sh@521 -- # config=() 00:16:10.635 11:52:01 -- nvmf/common.sh@521 -- # local subsystem config 00:16:10.635 11:52:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.635 11:52:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.635 { 00:16:10.635 "params": { 00:16:10.635 "name": "Nvme$subsystem", 00:16:10.635 "trtype": "$TEST_TRANSPORT", 00:16:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.635 "adrfam": "ipv4", 00:16:10.635 "trsvcid": "$NVMF_PORT", 00:16:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.635 "hdgst": ${hdgst:-false}, 00:16:10.635 "ddgst": ${ddgst:-false} 00:16:10.635 }, 00:16:10.635 "method": "bdev_nvme_attach_controller" 00:16:10.635 } 00:16:10.635 EOF 00:16:10.635 )") 00:16:10.635 11:52:01 -- nvmf/common.sh@543 -- # cat 00:16:10.635 11:52:01 -- nvmf/common.sh@545 -- # jq . 00:16:10.635 11:52:01 -- nvmf/common.sh@546 -- # IFS=, 00:16:10.635 11:52:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:10.635 "params": { 00:16:10.635 "name": "Nvme0", 00:16:10.635 "trtype": "tcp", 00:16:10.635 "traddr": "10.0.0.2", 00:16:10.635 "adrfam": "ipv4", 00:16:10.635 "trsvcid": "4420", 00:16:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:10.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:10.635 "hdgst": false, 00:16:10.635 "ddgst": false 00:16:10.635 }, 00:16:10.635 "method": "bdev_nvme_attach_controller" 00:16:10.635 }' 00:16:10.635 [2024-04-18 11:52:01.150014] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:10.635 [2024-04-18 11:52:01.150104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445472 ] 00:16:10.893 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.893 [2024-04-18 11:52:01.290365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.151 [2024-04-18 11:52:01.525634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.717 Running I/O for 1 seconds... 00:16:12.652 00:16:12.653 Latency(us) 00:16:12.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.653 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:12.653 Verification LBA range: start 0x0 length 0x400 00:16:12.653 Nvme0n1 : 1.04 1359.38 84.96 0.00 0.00 46432.26 8860.47 44669.34 00:16:12.653 =================================================================================================================== 00:16:12.653 Total : 1359.38 84.96 0.00 0.00 46432.26 8860.47 44669.34 00:16:14.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2444947 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:16:14.027 11:52:04 -- target/host_management.sh@102 -- # stoptarget 00:16:14.027 11:52:04 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:14.027 11:52:04 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:14.027 11:52:04 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:14.027 11:52:04 -- target/host_management.sh@40 -- # nvmftestfini 00:16:14.027 11:52:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:14.027 11:52:04 -- nvmf/common.sh@117 -- # sync 00:16:14.027 11:52:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.027 11:52:04 -- nvmf/common.sh@120 -- # set +e 00:16:14.027 11:52:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.027 11:52:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.027 rmmod nvme_tcp 00:16:14.027 rmmod nvme_fabrics 00:16:14.027 rmmod nvme_keyring 00:16:14.027 11:52:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.027 11:52:04 -- nvmf/common.sh@124 -- # set -e 00:16:14.027 11:52:04 -- nvmf/common.sh@125 -- # return 0 00:16:14.027 11:52:04 -- nvmf/common.sh@478 -- # '[' -n 2444747 ']' 00:16:14.027 11:52:04 -- nvmf/common.sh@479 -- # killprocess 2444747 00:16:14.027 11:52:04 -- common/autotest_common.sh@936 -- # '[' -z 2444747 ']' 00:16:14.027 11:52:04 -- common/autotest_common.sh@940 -- # kill -0 2444747 00:16:14.027 11:52:04 -- common/autotest_common.sh@941 -- # uname 00:16:14.027 11:52:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.027 11:52:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2444747 00:16:14.027 11:52:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:14.027 11:52:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:14.027 11:52:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2444747' 00:16:14.027 killing process with pid 2444747 00:16:14.027 11:52:04 -- common/autotest_common.sh@955 -- # kill 2444747 00:16:14.027 11:52:04 -- 
common/autotest_common.sh@960 -- # wait 2444747 00:16:15.403 [2024-04-18 11:52:05.661937] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:15.403 11:52:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:15.403 11:52:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:15.403 11:52:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:15.403 11:52:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.403 11:52:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.403 11:52:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.403 11:52:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.403 11:52:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.308 11:52:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.308 00:16:17.308 real 0m10.195s 00:16:17.308 user 0m33.732s 00:16:17.308 sys 0m1.730s 00:16:17.308 11:52:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:17.308 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:16:17.308 ************************************ 00:16:17.308 END TEST nvmf_host_management 00:16:17.308 ************************************ 00:16:17.567 11:52:07 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:17.567 00:16:17.567 real 0m17.253s 00:16:17.567 user 0m35.676s 00:16:17.567 sys 0m6.888s 00:16:17.567 11:52:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:17.567 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:16:17.567 ************************************ 00:16:17.567 END TEST nvmf_host_management 00:16:17.567 ************************************ 00:16:17.567 11:52:07 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:17.567 11:52:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:17.567 11:52:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.567 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:16:17.567 ************************************ 00:16:17.567 START TEST nvmf_lvol 00:16:17.567 ************************************ 00:16:17.567 11:52:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:17.825 * Looking for test storage... 
00:16:17.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.825 11:52:08 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.826 11:52:08 -- nvmf/common.sh@7 -- # uname -s 00:16:17.826 11:52:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.826 11:52:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.826 11:52:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.826 11:52:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.826 11:52:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.826 11:52:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.826 11:52:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.826 11:52:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.826 11:52:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.826 11:52:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.826 11:52:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:17.826 11:52:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:17.826 11:52:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.826 11:52:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.826 11:52:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.826 11:52:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.826 11:52:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.826 11:52:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.826 11:52:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.826 11:52:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.826 11:52:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.826 11:52:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.826 11:52:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.826 11:52:08 -- paths/export.sh@5 -- # export PATH 00:16:17.826 11:52:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.826 11:52:08 -- nvmf/common.sh@47 -- # : 0 00:16:17.826 11:52:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.826 11:52:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.826 11:52:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.826 11:52:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.826 11:52:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.826 11:52:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.826 11:52:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.826 11:52:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.826 11:52:08 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.826 11:52:08 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.826 11:52:08 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:17.826 11:52:08 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:17.826 11:52:08 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.826 11:52:08 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:17.826 11:52:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:17.826 11:52:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.826 11:52:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:17.826 11:52:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:17.826 11:52:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:17.826 11:52:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.826 11:52:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.826 11:52:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.826 11:52:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:17.826 11:52:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:17.826 11:52:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.826 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 11:52:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:24.429 11:52:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:24.429 11:52:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:24.429 11:52:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:24.429 11:52:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:24.429 11:52:14 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:16:24.429 11:52:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:24.429 11:52:14 -- nvmf/common.sh@295 -- # net_devs=() 00:16:24.429 11:52:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:24.429 11:52:14 -- nvmf/common.sh@296 -- # e810=() 00:16:24.429 11:52:14 -- nvmf/common.sh@296 -- # local -ga e810 00:16:24.429 11:52:14 -- nvmf/common.sh@297 -- # x722=() 00:16:24.429 11:52:14 -- nvmf/common.sh@297 -- # local -ga x722 00:16:24.429 11:52:14 -- nvmf/common.sh@298 -- # mlx=() 00:16:24.429 11:52:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:24.429 11:52:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.429 11:52:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:24.429 11:52:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:24.429 11:52:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:24.429 11:52:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.429 11:52:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:24.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:24.429 11:52:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.429 11:52:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:24.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:24.429 11:52:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:24.429 11:52:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.429 11:52:14 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.429 11:52:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:24.429 11:52:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.429 11:52:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:24.429 Found net devices under 0000:af:00.0: cvl_0_0 00:16:24.429 11:52:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.429 11:52:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.429 11:52:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.429 11:52:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:24.429 11:52:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.429 11:52:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:24.429 Found net devices under 0000:af:00.1: cvl_0_1 00:16:24.429 11:52:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.429 11:52:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:24.429 11:52:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:24.429 11:52:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:24.429 11:52:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.429 11:52:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.429 11:52:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.429 11:52:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:24.429 11:52:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.429 11:52:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.429 11:52:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:24.429 11:52:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.429 11:52:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.429 11:52:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:24.429 11:52:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:24.429 11:52:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.429 11:52:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.429 11:52:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.429 11:52:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.429 11:52:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:24.429 11:52:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.429 11:52:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.429 11:52:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.429 11:52:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:24.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:16:24.429 00:16:24.429 --- 10.0.0.2 ping statistics --- 00:16:24.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.429 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:24.429 11:52:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:16:24.429 00:16:24.429 --- 10.0.0.1 ping statistics --- 00:16:24.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.429 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:24.429 11:52:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.429 11:52:14 -- nvmf/common.sh@411 -- # return 0 00:16:24.429 11:52:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:24.429 11:52:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.429 11:52:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:24.429 11:52:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.429 11:52:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:24.429 11:52:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:24.429 11:52:14 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:24.429 11:52:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:24.429 11:52:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:24.429 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 11:52:14 -- nvmf/common.sh@470 -- # nvmfpid=2449762 00:16:24.429 11:52:14 -- nvmf/common.sh@471 -- # waitforlisten 2449762 00:16:24.430 11:52:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:24.430 11:52:14 -- common/autotest_common.sh@817 -- # '[' -z 2449762 ']' 00:16:24.430 11:52:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.430 11:52:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:24.430 11:52:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.430 11:52:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:24.430 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:16:24.430 [2024-04-18 11:52:14.693834] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:24.430 [2024-04-18 11:52:14.693919] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.430 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.430 [2024-04-18 11:52:14.822455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.688 [2024-04-18 11:52:15.035915] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.688 [2024-04-18 11:52:15.035961] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.688 [2024-04-18 11:52:15.035974] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.688 [2024-04-18 11:52:15.035986] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.688 [2024-04-18 11:52:15.036000] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
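The nvmftestinit block above wires the two ice ports so target and initiator speak NVMe/TCP over real hardware: one port is moved into a dedicated network namespace for the target, the other stays in the root namespace for the initiator. Condensed from the common.sh calls traced in this run (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to this rig), the wiring is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check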
00:16:24.688 [2024-04-18 11:52:15.036076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.688 [2024-04-18 11:52:15.036146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.688 [2024-04-18 11:52:15.036151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.946 11:52:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.946 11:52:15 -- common/autotest_common.sh@850 -- # return 0 00:16:24.946 11:52:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:24.946 11:52:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:24.946 11:52:15 -- common/autotest_common.sh@10 -- # set +x 00:16:25.204 11:52:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.204 11:52:15 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.204 [2024-04-18 11:52:15.657391] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.204 11:52:15 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.461 11:52:15 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:25.461 11:52:15 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.719 11:52:16 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:25.719 11:52:16 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:25.977 11:52:16 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:26.235 11:52:16 -- target/nvmf_lvol.sh@29 -- # lvs=42306dc7-ad31-42f9-ada0-8c636aeaa736 00:16:26.235 11:52:16 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42306dc7-ad31-42f9-ada0-8c636aeaa736 lvol 20 00:16:26.235 11:52:16 -- target/nvmf_lvol.sh@32 -- # lvol=37f3bed8-a909-4056-9137-7e0ecebd8e20 00:16:26.235 11:52:16 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:26.492 11:52:16 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37f3bed8-a909-4056-9137-7e0ecebd8e20 00:16:26.750 11:52:17 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:26.750 [2024-04-18 11:52:17.243105] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.750 11:52:17 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:27.008 11:52:17 -- target/nvmf_lvol.sh@42 -- # perf_pid=2450318 00:16:27.008 11:52:17 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:27.008 11:52:17 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:27.008 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.941 
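By this point nvmf_lvol.sh has stacked a logical volume store on a RAID-0 of two malloc bdevs, exported a 20 MiB lvol over NVMe/TCP, and kicked off spdk_nvme_perf against it; the snapshot, resize, clone and inflate steps that follow run while that I/O is in flight. Stripped of the harness plumbing, the RPC sequence is sketched below; $LVS_UUID, $LVOL_UUID, $SNAPSHOT_UUID and $CLONE_UUID are placeholders for the UUIDs the RPCs print elsewhere in this log.

rpc=$SPDK_DIR/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                       # Malloc0
$rpc bdev_malloc_create 64 512                                       # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc bdev_lvol_create_lvstore raid0 lvs                              # prints $LVS_UUID
$rpc bdev_lvol_create -u $LVS_UUID lvol 20                           # prints $LVOL_UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $LVOL_UUID
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
$rpc bdev_lvol_snapshot $LVOL_UUID MY_SNAPSHOT                       # read-only snapshot
$rpc bdev_lvol_resize $LVOL_UUID 30                                  # grow the lvol to 30 MiB
$rpc bdev_lvol_clone $SNAPSHOT_UUID MY_CLONE                         # thin clone of the snapshot
$rpc bdev_lvol_inflate $CLONE_UUID                                   # detach the clone from its snapshot
wait                                                                 # let the 10 s perf run finish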
11:52:18 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 37f3bed8-a909-4056-9137-7e0ecebd8e20 MY_SNAPSHOT 00:16:28.199 11:52:18 -- target/nvmf_lvol.sh@47 -- # snapshot=87559a8d-b968-4d20-a521-ad79466b97d6 00:16:28.199 11:52:18 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 37f3bed8-a909-4056-9137-7e0ecebd8e20 30 00:16:28.458 11:52:18 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 87559a8d-b968-4d20-a521-ad79466b97d6 MY_CLONE 00:16:28.716 11:52:19 -- target/nvmf_lvol.sh@49 -- # clone=5ef48467-cf34-46f0-b5a8-dd42cd9ee172 00:16:28.716 11:52:19 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5ef48467-cf34-46f0-b5a8-dd42cd9ee172 00:16:29.282 11:52:19 -- target/nvmf_lvol.sh@53 -- # wait 2450318 00:16:39.251 Initializing NVMe Controllers 00:16:39.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:39.251 Controller IO queue size 128, less than required. 00:16:39.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:39.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:39.251 Initialization complete. Launching workers. 00:16:39.251 ======================================================== 00:16:39.251 Latency(us) 00:16:39.251 Device Information : IOPS MiB/s Average min max 00:16:39.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11318.90 44.21 11312.54 449.29 176873.76 00:16:39.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11008.40 43.00 11630.09 3155.05 147899.65 00:16:39.252 ======================================================== 00:16:39.252 Total : 22327.30 87.22 11469.10 449.29 176873.76 00:16:39.252 00:16:39.252 11:52:28 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:39.252 11:52:28 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37f3bed8-a909-4056-9137-7e0ecebd8e20 00:16:39.252 11:52:28 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42306dc7-ad31-42f9-ada0-8c636aeaa736 00:16:39.252 11:52:28 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:39.252 11:52:28 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:39.252 11:52:28 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:39.252 11:52:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:39.252 11:52:28 -- nvmf/common.sh@117 -- # sync 00:16:39.252 11:52:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.252 11:52:28 -- nvmf/common.sh@120 -- # set +e 00:16:39.252 11:52:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.252 11:52:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.252 rmmod nvme_tcp 00:16:39.252 rmmod nvme_fabrics 00:16:39.252 rmmod nvme_keyring 00:16:39.252 11:52:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.252 11:52:28 -- nvmf/common.sh@124 -- # set -e 00:16:39.252 11:52:28 -- nvmf/common.sh@125 -- # return 0 00:16:39.252 11:52:28 -- nvmf/common.sh@478 -- # '[' -n 2449762 
']' 00:16:39.252 11:52:28 -- nvmf/common.sh@479 -- # killprocess 2449762 00:16:39.252 11:52:28 -- common/autotest_common.sh@936 -- # '[' -z 2449762 ']' 00:16:39.252 11:52:28 -- common/autotest_common.sh@940 -- # kill -0 2449762 00:16:39.252 11:52:28 -- common/autotest_common.sh@941 -- # uname 00:16:39.252 11:52:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.252 11:52:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2449762 00:16:39.252 11:52:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:39.252 11:52:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:39.252 11:52:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2449762' 00:16:39.252 killing process with pid 2449762 00:16:39.252 11:52:28 -- common/autotest_common.sh@955 -- # kill 2449762 00:16:39.252 11:52:28 -- common/autotest_common.sh@960 -- # wait 2449762 00:16:39.819 11:52:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:39.819 11:52:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:39.819 11:52:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:39.819 11:52:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.819 11:52:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.819 11:52:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.819 11:52:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.819 11:52:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.352 11:52:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:42.352 00:16:42.352 real 0m24.342s 00:16:42.352 user 1m5.977s 00:16:42.352 sys 0m9.589s 00:16:42.352 11:52:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:42.352 11:52:32 -- common/autotest_common.sh@10 -- # set +x 00:16:42.352 ************************************ 00:16:42.352 END TEST nvmf_lvol 00:16:42.352 ************************************ 00:16:42.352 11:52:32 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:42.352 11:52:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.352 11:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.352 11:52:32 -- common/autotest_common.sh@10 -- # set +x 00:16:42.352 ************************************ 00:16:42.352 START TEST nvmf_lvs_grow 00:16:42.352 ************************************ 00:16:42.352 11:52:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:42.352 * Looking for test storage... 
00:16:42.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.352 11:52:32 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.352 11:52:32 -- nvmf/common.sh@7 -- # uname -s 00:16:42.352 11:52:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.352 11:52:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.352 11:52:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.352 11:52:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.352 11:52:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.352 11:52:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.352 11:52:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.352 11:52:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.352 11:52:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.352 11:52:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.352 11:52:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:42.352 11:52:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:42.352 11:52:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.352 11:52:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.352 11:52:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.352 11:52:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.352 11:52:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.352 11:52:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.352 11:52:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.352 11:52:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.352 11:52:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.352 11:52:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.352 11:52:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.352 11:52:32 -- paths/export.sh@5 -- # export PATH 00:16:42.352 11:52:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.352 11:52:32 -- nvmf/common.sh@47 -- # : 0 00:16:42.352 11:52:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.352 11:52:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.352 11:52:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.352 11:52:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.352 11:52:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.352 11:52:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.352 11:52:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.352 11:52:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.352 11:52:32 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:42.352 11:52:32 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:42.352 11:52:32 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:42.352 11:52:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:42.352 11:52:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.352 11:52:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:42.352 11:52:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:42.352 11:52:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:42.352 11:52:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.352 11:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.352 11:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.352 11:52:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:42.352 11:52:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:42.352 11:52:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.352 11:52:32 -- common/autotest_common.sh@10 -- # set +x 00:16:48.988 11:52:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:48.988 11:52:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.988 11:52:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.988 11:52:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.988 11:52:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.988 11:52:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.988 11:52:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.988 11:52:39 -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.988 11:52:39 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.988 11:52:39 -- nvmf/common.sh@296 -- # e810=() 00:16:48.988 11:52:39 -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.988 11:52:39 -- nvmf/common.sh@297 -- # x722=() 00:16:48.988 11:52:39 -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.988 11:52:39 -- nvmf/common.sh@298 -- # mlx=() 00:16:48.988 11:52:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.988 11:52:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.988 11:52:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.988 11:52:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.988 11:52:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.988 11:52:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.988 11:52:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:48.988 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:48.988 11:52:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.988 11:52:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:48.988 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:48.988 11:52:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.988 11:52:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.988 11:52:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.988 11:52:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:48.988 11:52:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.988 11:52:39 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:48.988 Found net devices under 0000:af:00.0: cvl_0_0 00:16:48.988 11:52:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.988 11:52:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.988 11:52:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.988 11:52:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:48.988 11:52:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.988 11:52:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:48.988 Found net devices under 0000:af:00.1: cvl_0_1 00:16:48.988 11:52:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.988 11:52:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:48.988 11:52:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:48.988 11:52:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:48.988 11:52:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:48.988 11:52:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.988 11:52:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.988 11:52:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.988 11:52:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.988 11:52:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.988 11:52:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.988 11:52:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.988 11:52:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.988 11:52:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.988 11:52:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.988 11:52:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.988 11:52:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.988 11:52:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.988 11:52:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.988 11:52:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.989 11:52:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.989 11:52:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.989 11:52:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.989 11:52:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.989 11:52:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:16:48.989 00:16:48.989 --- 10.0.0.2 ping statistics --- 00:16:48.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.989 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:48.989 11:52:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:16:48.989 00:16:48.989 --- 10.0.0.1 ping statistics --- 00:16:48.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.989 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:48.989 11:52:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.989 11:52:39 -- nvmf/common.sh@411 -- # return 0 00:16:48.989 11:52:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:48.989 11:52:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.989 11:52:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:48.989 11:52:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:48.989 11:52:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.989 11:52:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:48.989 11:52:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:48.989 11:52:39 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:48.989 11:52:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:48.989 11:52:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:48.989 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:16:48.989 11:52:39 -- nvmf/common.sh@470 -- # nvmfpid=2456159 00:16:48.989 11:52:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:48.989 11:52:39 -- nvmf/common.sh@471 -- # waitforlisten 2456159 00:16:48.989 11:52:39 -- common/autotest_common.sh@817 -- # '[' -z 2456159 ']' 00:16:48.989 11:52:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.989 11:52:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.989 11:52:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.989 11:52:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.989 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:16:49.247 [2024-04-18 11:52:39.606193] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:49.247 [2024-04-18 11:52:39.606278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.247 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.247 [2024-04-18 11:52:39.733778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.505 [2024-04-18 11:52:39.956013] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.505 [2024-04-18 11:52:39.956066] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.505 [2024-04-18 11:52:39.956079] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.505 [2024-04-18 11:52:39.956093] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.505 [2024-04-18 11:52:39.956104] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
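As in the previous test, nvmfappstart launches the target inside the target namespace and blocks on waitforlisten until the RPC socket answers; the very next step (first entry of the following block) creates the TCP transport. Done by hand the sequence looks roughly like the sketch below; the polling loop is only a crude stand-in for the harness' waitforlisten helper.

ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll a cheap RPC until the app is up and answering on /var/tmp/spdk.sock
until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192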
00:16:49.505 [2024-04-18 11:52:39.956148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.069 11:52:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.069 11:52:40 -- common/autotest_common.sh@850 -- # return 0 00:16:50.069 11:52:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:50.069 11:52:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:50.069 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:16:50.069 11:52:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.069 11:52:40 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.069 [2024-04-18 11:52:40.560893] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.069 11:52:40 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:50.069 11:52:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:50.069 11:52:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.069 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:16:50.327 ************************************ 00:16:50.327 START TEST lvs_grow_clean 00:16:50.327 ************************************ 00:16:50.327 11:52:40 -- common/autotest_common.sh@1111 -- # lvs_grow 00:16:50.327 11:52:40 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.328 11:52:40 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.586 11:52:40 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:50.586 11:52:40 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:50.586 11:52:41 -- target/nvmf_lvs_grow.sh@28 -- # lvs=47184b3d-9cb6-4726-91a0-144ee8acbc63 00:16:50.586 11:52:41 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:16:50.586 11:52:41 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:50.844 11:52:41 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:50.844 11:52:41 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:50.844 11:52:41 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 lvol 150 00:16:51.128 11:52:41 -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe71ed10-8b74-49b6-a936-8cf9a3708ae5 00:16:51.128 11:52:41 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:51.128 11:52:41 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:51.128 [2024-04-18 11:52:41.600278] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:51.128 [2024-04-18 11:52:41.600357] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:51.128 true 00:16:51.128 11:52:41 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:16:51.128 11:52:41 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:51.386 11:52:41 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:51.386 11:52:41 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.644 11:52:41 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe71ed10-8b74-49b6-a936-8cf9a3708ae5 00:16:51.644 11:52:42 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.901 [2024-04-18 11:52:42.254353] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.901 11:52:42 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:51.901 11:52:42 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2456739 00:16:51.901 11:52:42 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:51.901 11:52:42 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.901 11:52:42 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2456739 /var/tmp/bdevperf.sock 00:16:51.901 11:52:42 -- common/autotest_common.sh@817 -- # '[' -z 2456739 ']' 00:16:51.901 11:52:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.901 11:52:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.901 11:52:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.901 11:52:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.901 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:16:52.159 [2024-04-18 11:52:42.512408] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
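lvs_grow_clean builds its lvolstore on an AIO bdev backed by a 200 MB file, exports a 150 MiB lvol through the bdevperf instance starting here, and later grows the backing file so the lvstore can claim the new clusters; the jq'd cluster counts (49 before the grow, 99 after) are what the test asserts on. A condensed sketch, with $TESTDIR and $LVS_UUID as placeholders for the aio_bdev path and lvstore UUID printed above:

rpc=$SPDK_DIR/scripts/rpc.py
truncate -s 200M $TESTDIR/aio_bdev
$rpc bdev_aio_create $TESTDIR/aio_bdev aio_bdev 4096
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].total_data_clusters'    # 49
$rpc bdev_lvol_create -u $LVS_UUID lvol 150                                    # 150 MiB volume
# grow the backing file, rescan the AIO bdev, then grow the lvstore into the new space
truncate -s 400M $TESTDIR/aio_bdev
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_grow_lvstore -u $LVS_UUID
$rpc bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].total_data_clusters'    # 99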
00:16:52.159 [2024-04-18 11:52:42.512500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456739 ] 00:16:52.159 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.159 [2024-04-18 11:52:42.634390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.417 [2024-04-18 11:52:42.848154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.982 11:52:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.982 11:52:43 -- common/autotest_common.sh@850 -- # return 0 00:16:52.982 11:52:43 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:53.239 Nvme0n1 00:16:53.239 11:52:43 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:53.496 [ 00:16:53.496 { 00:16:53.496 "name": "Nvme0n1", 00:16:53.496 "aliases": [ 00:16:53.496 "fe71ed10-8b74-49b6-a936-8cf9a3708ae5" 00:16:53.496 ], 00:16:53.496 "product_name": "NVMe disk", 00:16:53.496 "block_size": 4096, 00:16:53.496 "num_blocks": 38912, 00:16:53.496 "uuid": "fe71ed10-8b74-49b6-a936-8cf9a3708ae5", 00:16:53.496 "assigned_rate_limits": { 00:16:53.496 "rw_ios_per_sec": 0, 00:16:53.496 "rw_mbytes_per_sec": 0, 00:16:53.496 "r_mbytes_per_sec": 0, 00:16:53.496 "w_mbytes_per_sec": 0 00:16:53.496 }, 00:16:53.496 "claimed": false, 00:16:53.496 "zoned": false, 00:16:53.496 "supported_io_types": { 00:16:53.496 "read": true, 00:16:53.496 "write": true, 00:16:53.496 "unmap": true, 00:16:53.496 "write_zeroes": true, 00:16:53.496 "flush": true, 00:16:53.496 "reset": true, 00:16:53.496 "compare": true, 00:16:53.496 "compare_and_write": true, 00:16:53.496 "abort": true, 00:16:53.496 "nvme_admin": true, 00:16:53.496 "nvme_io": true 00:16:53.496 }, 00:16:53.496 "memory_domains": [ 00:16:53.496 { 00:16:53.496 "dma_device_id": "system", 00:16:53.496 "dma_device_type": 1 00:16:53.496 } 00:16:53.496 ], 00:16:53.496 "driver_specific": { 00:16:53.496 "nvme": [ 00:16:53.496 { 00:16:53.496 "trid": { 00:16:53.496 "trtype": "TCP", 00:16:53.496 "adrfam": "IPv4", 00:16:53.496 "traddr": "10.0.0.2", 00:16:53.496 "trsvcid": "4420", 00:16:53.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.496 }, 00:16:53.496 "ctrlr_data": { 00:16:53.496 "cntlid": 1, 00:16:53.496 "vendor_id": "0x8086", 00:16:53.496 "model_number": "SPDK bdev Controller", 00:16:53.496 "serial_number": "SPDK0", 00:16:53.496 "firmware_revision": "24.05", 00:16:53.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.496 "oacs": { 00:16:53.496 "security": 0, 00:16:53.496 "format": 0, 00:16:53.496 "firmware": 0, 00:16:53.496 "ns_manage": 0 00:16:53.496 }, 00:16:53.496 "multi_ctrlr": true, 00:16:53.496 "ana_reporting": false 00:16:53.496 }, 00:16:53.496 "vs": { 00:16:53.496 "nvme_version": "1.3" 00:16:53.496 }, 00:16:53.496 "ns_data": { 00:16:53.496 "id": 1, 00:16:53.496 "can_share": true 00:16:53.496 } 00:16:53.496 } 00:16:53.496 ], 00:16:53.496 "mp_policy": "active_passive" 00:16:53.496 } 00:16:53.496 } 00:16:53.496 ] 00:16:53.496 11:52:43 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2457006 00:16:53.496 11:52:43 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.496 11:52:43 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:53.496 Running I/O for 10 seconds... 00:16:54.430 Latency(us) 00:16:54.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.430 Nvme0n1 : 1.00 20316.00 79.36 0.00 0.00 0.00 0.00 0.00 00:16:54.430 =================================================================================================================== 00:16:54.430 Total : 20316.00 79.36 0.00 0.00 0.00 0.00 0.00 00:16:54.430 00:16:55.363 11:52:45 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:16:55.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.622 Nvme0n1 : 2.00 20521.50 80.16 0.00 0.00 0.00 0.00 0.00 00:16:55.622 =================================================================================================================== 00:16:55.622 Total : 20521.50 80.16 0.00 0.00 0.00 0.00 0.00 00:16:55.622 00:16:55.622 true 00:16:55.622 11:52:46 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:16:55.622 11:52:46 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:55.881 11:52:46 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:55.881 11:52:46 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:55.881 11:52:46 -- target/nvmf_lvs_grow.sh@65 -- # wait 2457006 00:16:56.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.447 Nvme0n1 : 3.00 20534.33 80.21 0.00 0.00 0.00 0.00 0.00 00:16:56.447 =================================================================================================================== 00:16:56.447 Total : 20534.33 80.21 0.00 0.00 0.00 0.00 0.00 00:16:56.447 00:16:57.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.381 Nvme0n1 : 4.00 20616.75 80.53 0.00 0.00 0.00 0.00 0.00 00:16:57.381 =================================================================================================================== 00:16:57.381 Total : 20616.75 80.53 0.00 0.00 0.00 0.00 0.00 00:16:57.381 00:16:58.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.754 Nvme0n1 : 5.00 20626.20 80.57 0.00 0.00 0.00 0.00 0.00 00:16:58.754 =================================================================================================================== 00:16:58.754 Total : 20626.20 80.57 0.00 0.00 0.00 0.00 0.00 00:16:58.754 00:16:59.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.688 Nvme0n1 : 6.00 20678.67 80.78 0.00 0.00 0.00 0.00 0.00 00:16:59.688 =================================================================================================================== 00:16:59.688 Total : 20678.67 80.78 0.00 0.00 0.00 0.00 0.00 00:16:59.688 00:17:00.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.622 Nvme0n1 : 7.00 20604.71 80.49 0.00 0.00 0.00 0.00 0.00 00:17:00.622 =================================================================================================================== 00:17:00.622 Total : 20604.71 80.49 0.00 0.00 0.00 
0.00 0.00 00:17:00.622 00:17:01.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.604 Nvme0n1 : 8.00 20599.12 80.47 0.00 0.00 0.00 0.00 0.00 00:17:01.604 =================================================================================================================== 00:17:01.604 Total : 20599.12 80.47 0.00 0.00 0.00 0.00 0.00 00:17:01.604 00:17:02.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.539 Nvme0n1 : 9.00 20620.78 80.55 0.00 0.00 0.00 0.00 0.00 00:17:02.539 =================================================================================================================== 00:17:02.539 Total : 20620.78 80.55 0.00 0.00 0.00 0.00 0.00 00:17:02.539 00:17:03.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.473 Nvme0n1 : 10.00 20657.80 80.69 0.00 0.00 0.00 0.00 0.00 00:17:03.473 =================================================================================================================== 00:17:03.473 Total : 20657.80 80.69 0.00 0.00 0.00 0.00 0.00 00:17:03.473 00:17:03.473 00:17:03.473 Latency(us) 00:17:03.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.473 Nvme0n1 : 10.01 20658.81 80.70 0.00 0.00 6192.12 3263.69 12111.05 00:17:03.473 =================================================================================================================== 00:17:03.473 Total : 20658.81 80.70 0.00 0.00 6192.12 3263.69 12111.05 00:17:03.473 0 00:17:03.473 11:52:53 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2456739 00:17:03.473 11:52:53 -- common/autotest_common.sh@936 -- # '[' -z 2456739 ']' 00:17:03.473 11:52:53 -- common/autotest_common.sh@940 -- # kill -0 2456739 00:17:03.473 11:52:53 -- common/autotest_common.sh@941 -- # uname 00:17:03.473 11:52:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.473 11:52:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2456739 00:17:03.731 11:52:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:03.731 11:52:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:03.731 11:52:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2456739' 00:17:03.731 killing process with pid 2456739 00:17:03.731 11:52:54 -- common/autotest_common.sh@955 -- # kill 2456739 00:17:03.731 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.731 00:17:03.731 Latency(us) 00:17:03.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.731 =================================================================================================================== 00:17:03.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.731 11:52:54 -- common/autotest_common.sh@960 -- # wait 2456739 00:17:04.665 11:52:55 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:04.665 11:52:55 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:04.665 11:52:55 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:04.923 11:52:55 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:04.923 11:52:55 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:04.923 11:52:55 -- 
target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:05.181 [2024-04-18 11:52:55.534268] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:05.181 11:52:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:05.181 11:52:55 -- common/autotest_common.sh@638 -- # local es=0 00:17:05.181 11:52:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:05.181 11:52:55 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.181 11:52:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:05.181 11:52:55 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.181 11:52:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:05.181 11:52:55 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.181 11:52:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:05.181 11:52:55 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.181 11:52:55 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:05.181 11:52:55 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:05.439 request: 00:17:05.439 { 00:17:05.439 "uuid": "47184b3d-9cb6-4726-91a0-144ee8acbc63", 00:17:05.439 "method": "bdev_lvol_get_lvstores", 00:17:05.439 "req_id": 1 00:17:05.439 } 00:17:05.439 Got JSON-RPC error response 00:17:05.439 response: 00:17:05.439 { 00:17:05.439 "code": -19, 00:17:05.439 "message": "No such device" 00:17:05.439 } 00:17:05.439 11:52:55 -- common/autotest_common.sh@641 -- # es=1 00:17:05.439 11:52:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:05.439 11:52:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:05.439 11:52:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:05.439 11:52:55 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:05.439 aio_bdev 00:17:05.439 11:52:55 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev fe71ed10-8b74-49b6-a936-8cf9a3708ae5 00:17:05.439 11:52:55 -- common/autotest_common.sh@885 -- # local bdev_name=fe71ed10-8b74-49b6-a936-8cf9a3708ae5 00:17:05.439 11:52:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:05.439 11:52:55 -- common/autotest_common.sh@887 -- # local i 00:17:05.439 11:52:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:05.439 11:52:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:05.439 11:52:55 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:05.697 11:52:56 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
fe71ed10-8b74-49b6-a936-8cf9a3708ae5 -t 2000 00:17:05.697 [ 00:17:05.697 { 00:17:05.697 "name": "fe71ed10-8b74-49b6-a936-8cf9a3708ae5", 00:17:05.697 "aliases": [ 00:17:05.697 "lvs/lvol" 00:17:05.697 ], 00:17:05.697 "product_name": "Logical Volume", 00:17:05.697 "block_size": 4096, 00:17:05.697 "num_blocks": 38912, 00:17:05.697 "uuid": "fe71ed10-8b74-49b6-a936-8cf9a3708ae5", 00:17:05.697 "assigned_rate_limits": { 00:17:05.697 "rw_ios_per_sec": 0, 00:17:05.697 "rw_mbytes_per_sec": 0, 00:17:05.697 "r_mbytes_per_sec": 0, 00:17:05.697 "w_mbytes_per_sec": 0 00:17:05.697 }, 00:17:05.697 "claimed": false, 00:17:05.697 "zoned": false, 00:17:05.697 "supported_io_types": { 00:17:05.697 "read": true, 00:17:05.697 "write": true, 00:17:05.697 "unmap": true, 00:17:05.697 "write_zeroes": true, 00:17:05.697 "flush": false, 00:17:05.697 "reset": true, 00:17:05.697 "compare": false, 00:17:05.697 "compare_and_write": false, 00:17:05.697 "abort": false, 00:17:05.697 "nvme_admin": false, 00:17:05.697 "nvme_io": false 00:17:05.697 }, 00:17:05.697 "driver_specific": { 00:17:05.697 "lvol": { 00:17:05.697 "lvol_store_uuid": "47184b3d-9cb6-4726-91a0-144ee8acbc63", 00:17:05.697 "base_bdev": "aio_bdev", 00:17:05.697 "thin_provision": false, 00:17:05.697 "snapshot": false, 00:17:05.697 "clone": false, 00:17:05.697 "esnap_clone": false 00:17:05.697 } 00:17:05.697 } 00:17:05.697 } 00:17:05.697 ] 00:17:05.697 11:52:56 -- common/autotest_common.sh@893 -- # return 0 00:17:05.697 11:52:56 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:05.697 11:52:56 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:05.955 11:52:56 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:05.955 11:52:56 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:05.955 11:52:56 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:06.213 11:52:56 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:06.213 11:52:56 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fe71ed10-8b74-49b6-a936-8cf9a3708ae5 00:17:06.213 11:52:56 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47184b3d-9cb6-4726-91a0-144ee8acbc63 00:17:06.471 11:52:56 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:06.729 11:52:57 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.729 00:17:06.729 real 0m16.361s 00:17:06.729 user 0m15.547s 00:17:06.729 sys 0m1.927s 00:17:06.729 11:52:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:06.729 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:17:06.729 ************************************ 00:17:06.729 END TEST lvs_grow_clean 00:17:06.729 ************************************ 00:17:06.729 11:52:57 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:06.729 11:52:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:06.729 11:52:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.729 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:17:06.729 ************************************ 
00:17:06.729 START TEST lvs_grow_dirty 00:17:06.729 ************************************ 00:17:06.987 11:52:57 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:06.987 11:52:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:07.245 11:52:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:07.245 11:52:57 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:07.245 11:52:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:07.503 11:52:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:07.503 11:52:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:07.503 11:52:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc lvol 150 00:17:07.503 11:52:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=98d01c76-39df-4f26-853b-5d22d698c807 00:17:07.503 11:52:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.503 11:52:58 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:07.761 [2024-04-18 11:52:58.157481] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:07.761 [2024-04-18 11:52:58.157556] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:07.761 true 00:17:07.761 11:52:58 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:07.762 11:52:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:08.019 11:52:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:08.019 11:52:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:08.019 11:52:58 
-- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98d01c76-39df-4f26-853b-5d22d698c807 00:17:08.277 11:52:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:08.277 11:52:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.536 11:52:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:08.536 11:52:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2459489 00:17:08.536 11:52:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.536 11:52:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2459489 /var/tmp/bdevperf.sock 00:17:08.536 11:52:58 -- common/autotest_common.sh@817 -- # '[' -z 2459489 ']' 00:17:08.536 11:52:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.536 11:52:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.536 11:52:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.536 11:52:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.536 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:17:08.536 [2024-04-18 11:52:59.009330] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:08.536 [2024-04-18 11:52:59.009422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459489 ] 00:17:08.536 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.794 [2024-04-18 11:52:59.133901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.794 [2024-04-18 11:52:59.341549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.361 11:52:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.361 11:52:59 -- common/autotest_common.sh@850 -- # return 0 00:17:09.361 11:52:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:09.619 Nvme0n1 00:17:09.619 11:53:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:09.878 [ 00:17:09.878 { 00:17:09.878 "name": "Nvme0n1", 00:17:09.878 "aliases": [ 00:17:09.878 "98d01c76-39df-4f26-853b-5d22d698c807" 00:17:09.878 ], 00:17:09.878 "product_name": "NVMe disk", 00:17:09.878 "block_size": 4096, 00:17:09.878 "num_blocks": 38912, 00:17:09.878 "uuid": "98d01c76-39df-4f26-853b-5d22d698c807", 00:17:09.878 "assigned_rate_limits": { 00:17:09.878 "rw_ios_per_sec": 0, 00:17:09.878 "rw_mbytes_per_sec": 0, 00:17:09.878 "r_mbytes_per_sec": 0, 00:17:09.878 "w_mbytes_per_sec": 0 00:17:09.878 }, 00:17:09.878 "claimed": false, 00:17:09.878 "zoned": false, 00:17:09.878 "supported_io_types": { 00:17:09.878 "read": true, 00:17:09.878 "write": true, 00:17:09.878 "unmap": true, 00:17:09.878 "write_zeroes": true, 00:17:09.878 "flush": true, 00:17:09.878 "reset": true, 00:17:09.878 "compare": true, 00:17:09.878 "compare_and_write": true, 00:17:09.878 "abort": true, 00:17:09.878 "nvme_admin": true, 00:17:09.878 "nvme_io": true 00:17:09.878 }, 00:17:09.878 "memory_domains": [ 00:17:09.878 { 00:17:09.878 "dma_device_id": "system", 00:17:09.878 "dma_device_type": 1 00:17:09.878 } 00:17:09.878 ], 00:17:09.878 "driver_specific": { 00:17:09.878 "nvme": [ 00:17:09.878 { 00:17:09.878 "trid": { 00:17:09.878 "trtype": "TCP", 00:17:09.878 "adrfam": "IPv4", 00:17:09.878 "traddr": "10.0.0.2", 00:17:09.878 "trsvcid": "4420", 00:17:09.878 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:09.878 }, 00:17:09.878 "ctrlr_data": { 00:17:09.878 "cntlid": 1, 00:17:09.879 "vendor_id": "0x8086", 00:17:09.879 "model_number": "SPDK bdev Controller", 00:17:09.879 "serial_number": "SPDK0", 00:17:09.879 "firmware_revision": "24.05", 00:17:09.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:09.879 "oacs": { 00:17:09.879 "security": 0, 00:17:09.879 "format": 0, 00:17:09.879 "firmware": 0, 00:17:09.879 "ns_manage": 0 00:17:09.879 }, 00:17:09.879 "multi_ctrlr": true, 00:17:09.879 "ana_reporting": false 00:17:09.879 }, 00:17:09.879 "vs": { 00:17:09.879 "nvme_version": "1.3" 00:17:09.879 }, 00:17:09.879 "ns_data": { 00:17:09.879 "id": 1, 00:17:09.879 "can_share": true 00:17:09.879 } 00:17:09.879 } 00:17:09.879 ], 00:17:09.879 "mp_policy": "active_passive" 00:17:09.879 } 00:17:09.879 } 00:17:09.879 ] 00:17:09.879 11:53:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2459747 00:17:09.879 11:53:00 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.879 11:53:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:09.879 Running I/O for 10 seconds... 00:17:10.813 Latency(us) 00:17:10.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.813 Nvme0n1 : 1.00 19856.00 77.56 0.00 0.00 0.00 0.00 0.00 00:17:10.813 =================================================================================================================== 00:17:10.813 Total : 19856.00 77.56 0.00 0.00 0.00 0.00 0.00 00:17:10.813 00:17:11.748 11:53:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:11.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.748 Nvme0n1 : 2.00 20168.00 78.78 0.00 0.00 0.00 0.00 0.00 00:17:11.748 =================================================================================================================== 00:17:11.748 Total : 20168.00 78.78 0.00 0.00 0.00 0.00 0.00 00:17:11.748 00:17:12.006 true 00:17:12.006 11:53:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:12.006 11:53:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:12.264 11:53:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:12.264 11:53:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:12.264 11:53:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 2459747 00:17:12.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.830 Nvme0n1 : 3.00 20256.00 79.12 0.00 0.00 0.00 0.00 0.00 00:17:12.830 =================================================================================================================== 00:17:12.830 Total : 20256.00 79.12 0.00 0.00 0.00 0.00 0.00 00:17:12.830 00:17:13.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.764 Nvme0n1 : 4.00 20339.75 79.45 0.00 0.00 0.00 0.00 0.00 00:17:13.764 =================================================================================================================== 00:17:13.764 Total : 20339.75 79.45 0.00 0.00 0.00 0.00 0.00 00:17:13.764 00:17:15.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.166 Nvme0n1 : 5.00 20406.40 79.71 0.00 0.00 0.00 0.00 0.00 00:17:15.166 =================================================================================================================== 00:17:15.166 Total : 20406.40 79.71 0.00 0.00 0.00 0.00 0.00 00:17:15.166 00:17:16.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.101 Nvme0n1 : 6.00 20448.50 79.88 0.00 0.00 0.00 0.00 0.00 00:17:16.101 =================================================================================================================== 00:17:16.101 Total : 20448.50 79.88 0.00 0.00 0.00 0.00 0.00 00:17:16.101 00:17:17.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.036 Nvme0n1 : 7.00 20491.29 80.04 0.00 0.00 0.00 0.00 0.00 00:17:17.036 =================================================================================================================== 00:17:17.036 Total : 20491.29 80.04 0.00 0.00 0.00 
0.00 0.00 00:17:17.036 00:17:17.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.970 Nvme0n1 : 8.00 20528.25 80.19 0.00 0.00 0.00 0.00 0.00 00:17:17.970 =================================================================================================================== 00:17:17.970 Total : 20528.25 80.19 0.00 0.00 0.00 0.00 0.00 00:17:17.970 00:17:18.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.906 Nvme0n1 : 9.00 20558.56 80.31 0.00 0.00 0.00 0.00 0.00 00:17:18.906 =================================================================================================================== 00:17:18.906 Total : 20558.56 80.31 0.00 0.00 0.00 0.00 0.00 00:17:18.906 00:17:19.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.841 Nvme0n1 : 10.00 20558.40 80.31 0.00 0.00 0.00 0.00 0.00 00:17:19.841 =================================================================================================================== 00:17:19.841 Total : 20558.40 80.31 0.00 0.00 0.00 0.00 0.00 00:17:19.841 00:17:19.841 00:17:19.841 Latency(us) 00:17:19.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.841 Nvme0n1 : 10.01 20557.37 80.30 0.00 0.00 6222.50 1795.69 11744.05 00:17:19.841 =================================================================================================================== 00:17:19.841 Total : 20557.37 80.30 0.00 0.00 6222.50 1795.69 11744.05 00:17:19.841 0 00:17:19.841 11:53:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2459489 00:17:19.841 11:53:10 -- common/autotest_common.sh@936 -- # '[' -z 2459489 ']' 00:17:19.841 11:53:10 -- common/autotest_common.sh@940 -- # kill -0 2459489 00:17:19.841 11:53:10 -- common/autotest_common.sh@941 -- # uname 00:17:19.841 11:53:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.841 11:53:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2459489 00:17:20.100 11:53:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:20.100 11:53:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:20.100 11:53:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2459489' 00:17:20.100 killing process with pid 2459489 00:17:20.100 11:53:10 -- common/autotest_common.sh@955 -- # kill 2459489 00:17:20.100 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.100 00:17:20.100 Latency(us) 00:17:20.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.100 =================================================================================================================== 00:17:20.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.100 11:53:10 -- common/autotest_common.sh@960 -- # wait 2459489 00:17:21.035 11:53:11 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:21.294 11:53:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:21.294 11:53:11 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:21.294 11:53:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:21.294 11:53:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:21.294 11:53:11 -- 
target/nvmf_lvs_grow.sh@73 -- # kill -9 2456159 00:17:21.294 11:53:11 -- target/nvmf_lvs_grow.sh@74 -- # wait 2456159 00:17:21.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2456159 Killed "${NVMF_APP[@]}" "$@" 00:17:21.552 11:53:11 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:21.552 11:53:11 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:21.552 11:53:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:21.552 11:53:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:21.552 11:53:11 -- common/autotest_common.sh@10 -- # set +x 00:17:21.552 11:53:11 -- nvmf/common.sh@470 -- # nvmfpid=2461740 00:17:21.552 11:53:11 -- nvmf/common.sh@471 -- # waitforlisten 2461740 00:17:21.552 11:53:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:21.552 11:53:11 -- common/autotest_common.sh@817 -- # '[' -z 2461740 ']' 00:17:21.552 11:53:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.552 11:53:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:21.552 11:53:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.552 11:53:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:21.552 11:53:11 -- common/autotest_common.sh@10 -- # set +x 00:17:21.552 [2024-04-18 11:53:11.979476] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:21.552 [2024-04-18 11:53:11.979570] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.552 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.810 [2024-04-18 11:53:12.113521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.810 [2024-04-18 11:53:12.318789] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.810 [2024-04-18 11:53:12.318840] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.810 [2024-04-18 11:53:12.318852] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.810 [2024-04-18 11:53:12.318865] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.810 [2024-04-18 11:53:12.318874] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.810 [2024-04-18 11:53:12.318907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.377 11:53:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:22.377 11:53:12 -- common/autotest_common.sh@850 -- # return 0 00:17:22.377 11:53:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:22.377 11:53:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.377 11:53:12 -- common/autotest_common.sh@10 -- # set +x 00:17:22.377 11:53:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.377 11:53:12 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.377 [2024-04-18 11:53:12.917711] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:22.377 [2024-04-18 11:53:12.917852] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:22.377 [2024-04-18 11:53:12.917887] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:22.636 11:53:12 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:22.636 11:53:12 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 98d01c76-39df-4f26-853b-5d22d698c807 00:17:22.636 11:53:12 -- common/autotest_common.sh@885 -- # local bdev_name=98d01c76-39df-4f26-853b-5d22d698c807 00:17:22.636 11:53:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:22.636 11:53:12 -- common/autotest_common.sh@887 -- # local i 00:17:22.636 11:53:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:22.636 11:53:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:22.636 11:53:12 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:22.636 11:53:13 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98d01c76-39df-4f26-853b-5d22d698c807 -t 2000 00:17:22.894 [ 00:17:22.894 { 00:17:22.894 "name": "98d01c76-39df-4f26-853b-5d22d698c807", 00:17:22.894 "aliases": [ 00:17:22.894 "lvs/lvol" 00:17:22.894 ], 00:17:22.894 "product_name": "Logical Volume", 00:17:22.894 "block_size": 4096, 00:17:22.894 "num_blocks": 38912, 00:17:22.894 "uuid": "98d01c76-39df-4f26-853b-5d22d698c807", 00:17:22.894 "assigned_rate_limits": { 00:17:22.894 "rw_ios_per_sec": 0, 00:17:22.894 "rw_mbytes_per_sec": 0, 00:17:22.894 "r_mbytes_per_sec": 0, 00:17:22.894 "w_mbytes_per_sec": 0 00:17:22.894 }, 00:17:22.894 "claimed": false, 00:17:22.894 "zoned": false, 00:17:22.894 "supported_io_types": { 00:17:22.894 "read": true, 00:17:22.894 "write": true, 00:17:22.894 "unmap": true, 00:17:22.894 "write_zeroes": true, 00:17:22.894 "flush": false, 00:17:22.894 "reset": true, 00:17:22.894 "compare": false, 00:17:22.894 "compare_and_write": false, 00:17:22.894 "abort": false, 00:17:22.894 "nvme_admin": false, 00:17:22.894 "nvme_io": false 00:17:22.894 }, 00:17:22.894 "driver_specific": { 00:17:22.894 "lvol": { 00:17:22.894 "lvol_store_uuid": "4ee476db-956f-4840-9090-0a6c2c5ba8fc", 00:17:22.894 "base_bdev": "aio_bdev", 00:17:22.894 "thin_provision": false, 00:17:22.894 "snapshot": false, 00:17:22.894 "clone": false, 00:17:22.894 "esnap_clone": false 00:17:22.894 } 00:17:22.894 } 00:17:22.894 } 00:17:22.894 ] 00:17:22.894 11:53:13 -- common/autotest_common.sh@893 -- # return 0 00:17:22.894 11:53:13 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:22.894 11:53:13 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:22.894 11:53:13 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:22.894 11:53:13 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:22.894 11:53:13 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:23.152 11:53:13 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:23.152 11:53:13 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.410 [2024-04-18 11:53:13.753841] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:23.410 11:53:13 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:23.410 11:53:13 -- common/autotest_common.sh@638 -- # local es=0 00:17:23.410 11:53:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:23.410 11:53:13 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.410 11:53:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.410 11:53:13 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.410 11:53:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.410 11:53:13 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.410 11:53:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.410 11:53:13 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.410 11:53:13 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.411 11:53:13 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:23.669 request: 00:17:23.669 { 00:17:23.669 "uuid": "4ee476db-956f-4840-9090-0a6c2c5ba8fc", 00:17:23.669 "method": "bdev_lvol_get_lvstores", 00:17:23.669 "req_id": 1 00:17:23.669 } 00:17:23.669 Got JSON-RPC error response 00:17:23.669 response: 00:17:23.669 { 00:17:23.669 "code": -19, 00:17:23.669 "message": "No such device" 00:17:23.669 } 00:17:23.669 11:53:13 -- common/autotest_common.sh@641 -- # es=1 00:17:23.669 11:53:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:23.669 11:53:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:23.669 11:53:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:23.669 11:53:13 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:23.669 aio_bdev 00:17:23.669 11:53:14 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 98d01c76-39df-4f26-853b-5d22d698c807 00:17:23.669 11:53:14 -- 
common/autotest_common.sh@885 -- # local bdev_name=98d01c76-39df-4f26-853b-5d22d698c807 00:17:23.669 11:53:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:23.669 11:53:14 -- common/autotest_common.sh@887 -- # local i 00:17:23.669 11:53:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:23.669 11:53:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:23.669 11:53:14 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.928 11:53:14 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98d01c76-39df-4f26-853b-5d22d698c807 -t 2000 00:17:23.928 [ 00:17:23.928 { 00:17:23.928 "name": "98d01c76-39df-4f26-853b-5d22d698c807", 00:17:23.928 "aliases": [ 00:17:23.928 "lvs/lvol" 00:17:23.928 ], 00:17:23.928 "product_name": "Logical Volume", 00:17:23.928 "block_size": 4096, 00:17:23.928 "num_blocks": 38912, 00:17:23.928 "uuid": "98d01c76-39df-4f26-853b-5d22d698c807", 00:17:23.928 "assigned_rate_limits": { 00:17:23.928 "rw_ios_per_sec": 0, 00:17:23.928 "rw_mbytes_per_sec": 0, 00:17:23.928 "r_mbytes_per_sec": 0, 00:17:23.928 "w_mbytes_per_sec": 0 00:17:23.928 }, 00:17:23.928 "claimed": false, 00:17:23.928 "zoned": false, 00:17:23.928 "supported_io_types": { 00:17:23.928 "read": true, 00:17:23.928 "write": true, 00:17:23.928 "unmap": true, 00:17:23.928 "write_zeroes": true, 00:17:23.928 "flush": false, 00:17:23.928 "reset": true, 00:17:23.928 "compare": false, 00:17:23.928 "compare_and_write": false, 00:17:23.928 "abort": false, 00:17:23.928 "nvme_admin": false, 00:17:23.928 "nvme_io": false 00:17:23.928 }, 00:17:23.928 "driver_specific": { 00:17:23.929 "lvol": { 00:17:23.929 "lvol_store_uuid": "4ee476db-956f-4840-9090-0a6c2c5ba8fc", 00:17:23.929 "base_bdev": "aio_bdev", 00:17:23.929 "thin_provision": false, 00:17:23.929 "snapshot": false, 00:17:23.929 "clone": false, 00:17:23.929 "esnap_clone": false 00:17:23.929 } 00:17:23.929 } 00:17:23.929 } 00:17:23.929 ] 00:17:23.929 11:53:14 -- common/autotest_common.sh@893 -- # return 0 00:17:23.929 11:53:14 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:23.929 11:53:14 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:24.187 11:53:14 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:24.187 11:53:14 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:24.187 11:53:14 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:24.445 11:53:14 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:24.445 11:53:14 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98d01c76-39df-4f26-853b-5d22d698c807 00:17:24.445 11:53:14 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ee476db-956f-4840-9090-0a6c2c5ba8fc 00:17:24.704 11:53:15 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:24.962 11:53:15 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:24.962 00:17:24.962 real 0m18.058s 00:17:24.962 user 
0m45.982s 00:17:24.962 sys 0m4.621s 00:17:24.963 11:53:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.963 11:53:15 -- common/autotest_common.sh@10 -- # set +x 00:17:24.963 ************************************ 00:17:24.963 END TEST lvs_grow_dirty 00:17:24.963 ************************************ 00:17:24.963 11:53:15 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:24.963 11:53:15 -- common/autotest_common.sh@794 -- # type=--id 00:17:24.963 11:53:15 -- common/autotest_common.sh@795 -- # id=0 00:17:24.963 11:53:15 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:24.963 11:53:15 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:24.963 11:53:15 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:24.963 11:53:15 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:24.963 11:53:15 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:24.963 11:53:15 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:24.963 nvmf_trace.0 00:17:24.963 11:53:15 -- common/autotest_common.sh@809 -- # return 0 00:17:24.963 11:53:15 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:24.963 11:53:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:24.963 11:53:15 -- nvmf/common.sh@117 -- # sync 00:17:24.963 11:53:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.963 11:53:15 -- nvmf/common.sh@120 -- # set +e 00:17:24.963 11:53:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.963 11:53:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.963 rmmod nvme_tcp 00:17:24.963 rmmod nvme_fabrics 00:17:24.963 rmmod nvme_keyring 00:17:24.963 11:53:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.963 11:53:15 -- nvmf/common.sh@124 -- # set -e 00:17:24.963 11:53:15 -- nvmf/common.sh@125 -- # return 0 00:17:24.963 11:53:15 -- nvmf/common.sh@478 -- # '[' -n 2461740 ']' 00:17:24.963 11:53:15 -- nvmf/common.sh@479 -- # killprocess 2461740 00:17:24.963 11:53:15 -- common/autotest_common.sh@936 -- # '[' -z 2461740 ']' 00:17:24.963 11:53:15 -- common/autotest_common.sh@940 -- # kill -0 2461740 00:17:24.963 11:53:15 -- common/autotest_common.sh@941 -- # uname 00:17:24.963 11:53:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.963 11:53:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2461740 00:17:25.221 11:53:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:25.221 11:53:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:25.221 11:53:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2461740' 00:17:25.221 killing process with pid 2461740 00:17:25.221 11:53:15 -- common/autotest_common.sh@955 -- # kill 2461740 00:17:25.221 11:53:15 -- common/autotest_common.sh@960 -- # wait 2461740 00:17:26.600 11:53:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:26.600 11:53:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:26.600 11:53:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:26.600 11:53:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.600 11:53:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.600 11:53:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.600 11:53:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.600 11:53:16 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:28.504 11:53:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.504 00:17:28.504 real 0m46.198s 00:17:28.504 user 1m8.564s 00:17:28.504 sys 0m12.352s 00:17:28.504 11:53:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:28.504 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.504 ************************************ 00:17:28.504 END TEST nvmf_lvs_grow 00:17:28.504 ************************************ 00:17:28.504 11:53:18 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:28.504 11:53:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:28.504 11:53:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.504 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.504 ************************************ 00:17:28.504 START TEST nvmf_bdev_io_wait 00:17:28.504 ************************************ 00:17:28.504 11:53:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:28.765 * Looking for test storage... 00:17:28.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.765 11:53:19 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.765 11:53:19 -- nvmf/common.sh@7 -- # uname -s 00:17:28.765 11:53:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.765 11:53:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.765 11:53:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.765 11:53:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.765 11:53:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.765 11:53:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.765 11:53:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.765 11:53:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.765 11:53:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.765 11:53:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.765 11:53:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:28.765 11:53:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:28.765 11:53:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.765 11:53:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.765 11:53:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.765 11:53:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.765 11:53:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.765 11:53:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.765 11:53:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.765 11:53:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.765 11:53:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.765 11:53:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.765 11:53:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.765 11:53:19 -- paths/export.sh@5 -- # export PATH 00:17:28.765 11:53:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.765 11:53:19 -- nvmf/common.sh@47 -- # : 0 00:17:28.765 11:53:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.765 11:53:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.765 11:53:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.765 11:53:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.765 11:53:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.765 11:53:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.765 11:53:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.765 11:53:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.765 11:53:19 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.765 11:53:19 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.765 11:53:19 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:28.765 11:53:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:28.765 11:53:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.766 11:53:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:28.766 11:53:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:28.766 11:53:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:28.766 11:53:19 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.766 11:53:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.766 11:53:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.766 11:53:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:28.766 11:53:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:28.766 11:53:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.766 11:53:19 -- common/autotest_common.sh@10 -- # set +x 00:17:35.371 11:53:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.371 11:53:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.371 11:53:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.371 11:53:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.371 11:53:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.371 11:53:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.371 11:53:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.371 11:53:25 -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.371 11:53:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.371 11:53:25 -- nvmf/common.sh@296 -- # e810=() 00:17:35.371 11:53:25 -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.371 11:53:25 -- nvmf/common.sh@297 -- # x722=() 00:17:35.371 11:53:25 -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.371 11:53:25 -- nvmf/common.sh@298 -- # mlx=() 00:17:35.371 11:53:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.371 11:53:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.371 11:53:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.371 11:53:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.371 11:53:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.371 11:53:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.371 11:53:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:35.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:35.371 11:53:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:17:35.371 11:53:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:35.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:35.371 11:53:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.371 11:53:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.371 11:53:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.371 11:53:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.371 11:53:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.371 11:53:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:35.371 Found net devices under 0000:af:00.0: cvl_0_0 00:17:35.371 11:53:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.371 11:53:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.371 11:53:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.371 11:53:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.371 11:53:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.371 11:53:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:35.371 Found net devices under 0000:af:00.1: cvl_0_1 00:17:35.371 11:53:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.371 11:53:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:35.371 11:53:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:35.371 11:53:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:35.371 11:53:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:35.371 11:53:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.371 11:53:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.371 11:53:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.371 11:53:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.371 11:53:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.371 11:53:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.371 11:53:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.371 11:53:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.371 11:53:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.371 11:53:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.371 11:53:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.371 11:53:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.371 11:53:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.371 11:53:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.371 11:53:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.371 11:53:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.371 11:53:25 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.371 11:53:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.372 11:53:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.372 11:53:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:17:35.372 00:17:35.372 --- 10.0.0.2 ping statistics --- 00:17:35.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.372 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:35.372 11:53:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:17:35.372 00:17:35.372 --- 10.0.0.1 ping statistics --- 00:17:35.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.372 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:17:35.372 11:53:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.372 11:53:25 -- nvmf/common.sh@411 -- # return 0 00:17:35.372 11:53:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.372 11:53:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.372 11:53:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.372 11:53:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.372 11:53:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.372 11:53:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.372 11:53:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.630 11:53:25 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:35.630 11:53:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.630 11:53:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.630 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:17:35.630 11:53:25 -- nvmf/common.sh@470 -- # nvmfpid=2466236 00:17:35.630 11:53:25 -- nvmf/common.sh@471 -- # waitforlisten 2466236 00:17:35.630 11:53:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:35.630 11:53:25 -- common/autotest_common.sh@817 -- # '[' -z 2466236 ']' 00:17:35.630 11:53:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.630 11:53:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.630 11:53:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.630 11:53:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.630 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:17:35.630 [2024-04-18 11:53:26.049731] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
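The namespace setup that produced the ping output above (nvmf_tcp_init in nvmf/common.sh) boils down to a short ip/iptables sequence. Condensed from the commands visible in the log, using the interface names and addresses the harness reported on this test bed:

    # move one E810 port into a private namespace and give each side a 10.0.0.x address
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation below is then prefixed with 'ip netns exec cvl_0_0_ns_spdk', so the target listens on 10.0.0.2 while the bdevperf initiators connect from the default namespace over 10.0.0.1.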
00:17:35.630 [2024-04-18 11:53:26.049820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.630 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.888 [2024-04-18 11:53:26.181291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.888 [2024-04-18 11:53:26.404517] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.888 [2024-04-18 11:53:26.404565] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.888 [2024-04-18 11:53:26.404577] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.889 [2024-04-18 11:53:26.404591] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.889 [2024-04-18 11:53:26.404600] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.889 [2024-04-18 11:53:26.404685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.889 [2024-04-18 11:53:26.404759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.889 [2024-04-18 11:53:26.404820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.889 [2024-04-18 11:53:26.404828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.454 11:53:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.454 11:53:26 -- common/autotest_common.sh@850 -- # return 0 00:17:36.454 11:53:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.454 11:53:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.454 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.454 11:53:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.454 11:53:26 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:36.454 11:53:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.454 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.454 11:53:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.454 11:53:26 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:36.454 11:53:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.454 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.712 11:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.712 11:53:27 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.712 11:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.712 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.712 [2024-04-18 11:53:27.138591] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.712 11:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.712 11:53:27 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.712 11:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.712 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.712 Malloc0 00:17:36.712 11:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.712 11:53:27 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.712 11:53:27 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.712 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 11:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.971 11:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.971 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 11:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.971 11:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.971 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 [2024-04-18 11:53:27.276635] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.971 11:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2466476 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@30 -- # READ_PID=2466478 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:36.971 11:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:36.971 11:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:36.971 { 00:17:36.971 "params": { 00:17:36.971 "name": "Nvme$subsystem", 00:17:36.971 "trtype": "$TEST_TRANSPORT", 00:17:36.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.971 "adrfam": "ipv4", 00:17:36.971 "trsvcid": "$NVMF_PORT", 00:17:36.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.971 "hdgst": ${hdgst:-false}, 00:17:36.971 "ddgst": ${ddgst:-false} 00:17:36.971 }, 00:17:36.971 "method": "bdev_nvme_attach_controller" 00:17:36.971 } 00:17:36.971 EOF 00:17:36.971 )") 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2466480 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:36.971 11:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:36.971 11:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:36.971 { 00:17:36.971 "params": { 00:17:36.971 "name": "Nvme$subsystem", 00:17:36.971 "trtype": "$TEST_TRANSPORT", 00:17:36.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.971 "adrfam": "ipv4", 00:17:36.971 "trsvcid": "$NVMF_PORT", 00:17:36.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.971 "hdgst": ${hdgst:-false}, 00:17:36.971 "ddgst": ${ddgst:-false} 00:17:36.971 }, 00:17:36.971 "method": "bdev_nvme_attach_controller" 00:17:36.971 } 00:17:36.971 EOF 00:17:36.971 )") 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2466483 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:36.971 11:53:27 -- nvmf/common.sh@543 -- # cat 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@35 -- # sync 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:36.971 11:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:36.971 11:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:36.971 { 00:17:36.971 "params": { 00:17:36.971 "name": "Nvme$subsystem", 00:17:36.971 "trtype": "$TEST_TRANSPORT", 00:17:36.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.971 "adrfam": "ipv4", 00:17:36.971 "trsvcid": "$NVMF_PORT", 00:17:36.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.971 "hdgst": ${hdgst:-false}, 00:17:36.971 "ddgst": ${ddgst:-false} 00:17:36.971 }, 00:17:36.971 "method": "bdev_nvme_attach_controller" 00:17:36.971 } 00:17:36.971 EOF 00:17:36.971 )") 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:36.971 11:53:27 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:36.971 11:53:27 -- nvmf/common.sh@543 -- # cat 00:17:36.971 11:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:36.971 11:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:36.971 11:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:36.971 { 00:17:36.971 "params": { 00:17:36.971 "name": "Nvme$subsystem", 00:17:36.971 "trtype": "$TEST_TRANSPORT", 00:17:36.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.971 "adrfam": "ipv4", 00:17:36.971 "trsvcid": "$NVMF_PORT", 00:17:36.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.971 "hdgst": ${hdgst:-false}, 00:17:36.971 "ddgst": ${ddgst:-false} 00:17:36.971 }, 00:17:36.971 "method": "bdev_nvme_attach_controller" 00:17:36.971 } 00:17:36.971 EOF 00:17:36.972 )") 00:17:36.972 11:53:27 -- nvmf/common.sh@543 -- # cat 00:17:36.972 11:53:27 -- target/bdev_io_wait.sh@37 -- # wait 2466476 00:17:36.972 11:53:27 -- nvmf/common.sh@543 -- # cat 00:17:36.972 11:53:27 -- nvmf/common.sh@545 -- # jq . 00:17:36.972 11:53:27 -- nvmf/common.sh@545 -- # jq . 00:17:36.972 11:53:27 -- nvmf/common.sh@545 -- # jq . 00:17:36.972 11:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:36.972 11:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:36.972 "params": { 00:17:36.972 "name": "Nvme1", 00:17:36.972 "trtype": "tcp", 00:17:36.972 "traddr": "10.0.0.2", 00:17:36.972 "adrfam": "ipv4", 00:17:36.972 "trsvcid": "4420", 00:17:36.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.972 "hdgst": false, 00:17:36.972 "ddgst": false 00:17:36.972 }, 00:17:36.972 "method": "bdev_nvme_attach_controller" 00:17:36.972 }' 00:17:36.972 11:53:27 -- nvmf/common.sh@545 -- # jq . 
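Condensed from the rpc_cmd and bdevperf invocations above: the target is configured over RPC inside the namespace, then four bdevperf instances (write, read, flush, unmap; PIDs 2466476/2466478/2466480/2466483) are launched in parallel, each reading the generated attach-controller JSON through process substitution, which the shell exposes as /dev/fd/63. A minimal sketch, assuming rpc_cmd wraps scripts/rpc.py against the target's default socket:

    # target side (driven through rpc_cmd above)
    rpc_cmd bdev_set_options -p 5 -c 1
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: one bdevperf per workload (core masks 0x10/0x20/0x40/0x80), each fed
    # a config that attaches Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)

The rendered JSON for each instance carries exactly the parameters printed by the printf above (trtype tcp, traddr 10.0.0.2, trsvcid 4420, hdgst/ddgst disabled).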
00:17:36.972 11:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:36.972 11:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:36.972 "params": { 00:17:36.972 "name": "Nvme1", 00:17:36.972 "trtype": "tcp", 00:17:36.972 "traddr": "10.0.0.2", 00:17:36.972 "adrfam": "ipv4", 00:17:36.972 "trsvcid": "4420", 00:17:36.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.972 "hdgst": false, 00:17:36.972 "ddgst": false 00:17:36.972 }, 00:17:36.972 "method": "bdev_nvme_attach_controller" 00:17:36.972 }' 00:17:36.972 11:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:36.972 11:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:36.972 "params": { 00:17:36.972 "name": "Nvme1", 00:17:36.972 "trtype": "tcp", 00:17:36.972 "traddr": "10.0.0.2", 00:17:36.972 "adrfam": "ipv4", 00:17:36.972 "trsvcid": "4420", 00:17:36.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.972 "hdgst": false, 00:17:36.972 "ddgst": false 00:17:36.972 }, 00:17:36.972 "method": "bdev_nvme_attach_controller" 00:17:36.972 }' 00:17:36.972 11:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:36.972 11:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:36.972 "params": { 00:17:36.972 "name": "Nvme1", 00:17:36.972 "trtype": "tcp", 00:17:36.972 "traddr": "10.0.0.2", 00:17:36.972 "adrfam": "ipv4", 00:17:36.972 "trsvcid": "4420", 00:17:36.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.972 "hdgst": false, 00:17:36.972 "ddgst": false 00:17:36.972 }, 00:17:36.972 "method": "bdev_nvme_attach_controller" 00:17:36.972 }' 00:17:36.972 [2024-04-18 11:53:27.361412] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:36.972 [2024-04-18 11:53:27.361522] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:36.972 [2024-04-18 11:53:27.361578] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:36.972 [2024-04-18 11:53:27.361661] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:36.972 [2024-04-18 11:53:27.363816] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:36.972 [2024-04-18 11:53:27.363894] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:36.972 [2024-04-18 11:53:27.365549] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:36.972 [2024-04-18 11:53:27.365643] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:36.972 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.229 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.229 [2024-04-18 11:53:27.600330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.229 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.229 [2024-04-18 11:53:27.696903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.229 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.487 [2024-04-18 11:53:27.798569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.487 [2024-04-18 11:53:27.830593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:37.487 [2024-04-18 11:53:27.865446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.487 [2024-04-18 11:53:27.909961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:37.487 [2024-04-18 11:53:28.024374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:37.745 [2024-04-18 11:53:28.074952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.745 Running I/O for 1 seconds... 00:17:38.003 Running I/O for 1 seconds... 00:17:38.261 Running I/O for 1 seconds... 00:17:38.261 Running I/O for 1 seconds... 00:17:38.828 00:17:38.828 Latency(us) 00:17:38.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.828 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:38.828 Nvme1n1 : 1.01 13555.34 52.95 0.00 0.00 9412.54 5636.10 16148.07 00:17:38.829 =================================================================================================================== 00:17:38.829 Total : 13555.34 52.95 0.00 0.00 9412.54 5636.10 16148.07 00:17:39.087 00:17:39.087 Latency(us) 00:17:39.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.087 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:39.087 Nvme1n1 : 1.02 6547.24 25.58 0.00 0.00 19417.81 5006.95 29989.27 00:17:39.087 =================================================================================================================== 00:17:39.087 Total : 6547.24 25.58 0.00 0.00 19417.81 5006.95 29989.27 00:17:39.346 00:17:39.346 Latency(us) 00:17:39.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.346 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:39.346 Nvme1n1 : 1.01 7469.65 29.18 0.00 0.00 17068.50 6579.81 43411.05 00:17:39.346 =================================================================================================================== 00:17:39.346 Total : 7469.65 29.18 0.00 0.00 17068.50 6579.81 43411.05 00:17:39.346 00:17:39.346 Latency(us) 00:17:39.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.346 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:39.346 Nvme1n1 : 1.00 231579.45 904.61 0.00 0.00 550.69 235.93 740.56 00:17:39.346 =================================================================================================================== 00:17:39.346 Total : 231579.45 904.61 0.00 0.00 550.69 235.93 740.56 00:17:39.911 11:53:30 -- target/bdev_io_wait.sh@38 -- # wait 2466478 00:17:40.169 
11:53:30 -- target/bdev_io_wait.sh@39 -- # wait 2466480 00:17:40.428 11:53:30 -- target/bdev_io_wait.sh@40 -- # wait 2466483 00:17:40.428 11:53:30 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.428 11:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.428 11:53:30 -- common/autotest_common.sh@10 -- # set +x 00:17:40.428 11:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.428 11:53:30 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:40.428 11:53:30 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:40.428 11:53:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:40.428 11:53:30 -- nvmf/common.sh@117 -- # sync 00:17:40.428 11:53:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.428 11:53:30 -- nvmf/common.sh@120 -- # set +e 00:17:40.428 11:53:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.428 11:53:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.428 rmmod nvme_tcp 00:17:40.428 rmmod nvme_fabrics 00:17:40.428 rmmod nvme_keyring 00:17:40.428 11:53:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.428 11:53:30 -- nvmf/common.sh@124 -- # set -e 00:17:40.428 11:53:30 -- nvmf/common.sh@125 -- # return 0 00:17:40.428 11:53:30 -- nvmf/common.sh@478 -- # '[' -n 2466236 ']' 00:17:40.428 11:53:30 -- nvmf/common.sh@479 -- # killprocess 2466236 00:17:40.428 11:53:30 -- common/autotest_common.sh@936 -- # '[' -z 2466236 ']' 00:17:40.428 11:53:30 -- common/autotest_common.sh@940 -- # kill -0 2466236 00:17:40.428 11:53:30 -- common/autotest_common.sh@941 -- # uname 00:17:40.428 11:53:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.428 11:53:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2466236 00:17:40.428 11:53:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:40.428 11:53:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:40.428 11:53:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2466236' 00:17:40.428 killing process with pid 2466236 00:17:40.428 11:53:30 -- common/autotest_common.sh@955 -- # kill 2466236 00:17:40.428 11:53:30 -- common/autotest_common.sh@960 -- # wait 2466236 00:17:41.806 11:53:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:41.806 11:53:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:41.806 11:53:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:41.806 11:53:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.806 11:53:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.806 11:53:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.806 11:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.806 11:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.712 11:53:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.712 00:17:43.712 real 0m15.115s 00:17:43.713 user 0m33.468s 00:17:43.713 sys 0m7.641s 00:17:43.713 11:53:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:43.713 11:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.713 ************************************ 00:17:43.713 END TEST nvmf_bdev_io_wait 00:17:43.713 ************************************ 00:17:43.713 11:53:34 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:43.713 11:53:34 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:17:43.713 11:53:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.713 11:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.974 ************************************ 00:17:43.974 START TEST nvmf_queue_depth 00:17:43.974 ************************************ 00:17:43.974 11:53:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:43.974 * Looking for test storage... 00:17:43.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.974 11:53:34 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.974 11:53:34 -- nvmf/common.sh@7 -- # uname -s 00:17:43.974 11:53:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.974 11:53:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.974 11:53:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.974 11:53:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.974 11:53:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.974 11:53:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.974 11:53:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.974 11:53:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.974 11:53:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.974 11:53:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.974 11:53:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.974 11:53:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:43.974 11:53:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.974 11:53:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.974 11:53:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.974 11:53:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.974 11:53:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.974 11:53:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.974 11:53:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.974 11:53:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.974 11:53:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.974 11:53:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.975 11:53:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.975 11:53:34 -- paths/export.sh@5 -- # export PATH 00:17:43.975 11:53:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.975 11:53:34 -- nvmf/common.sh@47 -- # : 0 00:17:43.975 11:53:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.975 11:53:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.975 11:53:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.975 11:53:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.975 11:53:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.975 11:53:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.975 11:53:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.975 11:53:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.975 11:53:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:43.975 11:53:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:43.975 11:53:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.975 11:53:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:43.975 11:53:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:43.975 11:53:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.975 11:53:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:43.975 11:53:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:43.975 11:53:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:43.975 11:53:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.975 11:53:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.975 11:53:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.975 11:53:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:43.975 11:53:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:43.975 11:53:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.975 11:53:34 -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.560 11:53:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:50.560 11:53:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.560 11:53:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.560 11:53:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.560 11:53:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.560 11:53:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.560 11:53:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.560 11:53:40 -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.560 11:53:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.560 11:53:40 -- nvmf/common.sh@296 -- # e810=() 00:17:50.560 11:53:40 -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.560 11:53:40 -- nvmf/common.sh@297 -- # x722=() 00:17:50.560 11:53:40 -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.560 11:53:40 -- nvmf/common.sh@298 -- # mlx=() 00:17:50.560 11:53:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.560 11:53:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.560 11:53:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.560 11:53:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.560 11:53:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.560 11:53:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.560 11:53:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:50.560 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:50.560 11:53:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.560 11:53:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:50.560 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:50.560 11:53:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:17:50.560 11:53:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.560 11:53:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.560 11:53:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.560 11:53:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:50.560 11:53:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.560 11:53:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:50.560 Found net devices under 0000:af:00.0: cvl_0_0 00:17:50.560 11:53:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.560 11:53:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.560 11:53:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.560 11:53:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:50.560 11:53:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.560 11:53:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:50.560 Found net devices under 0000:af:00.1: cvl_0_1 00:17:50.560 11:53:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.560 11:53:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:50.560 11:53:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:50.560 11:53:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:50.560 11:53:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:50.560 11:53:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.560 11:53:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.560 11:53:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.560 11:53:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.560 11:53:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.560 11:53:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.560 11:53:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.560 11:53:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.560 11:53:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.560 11:53:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.560 11:53:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.560 11:53:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.560 11:53:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.560 11:53:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.560 11:53:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.560 11:53:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.560 11:53:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.560 11:53:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.560 11:53:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.560 11:53:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:17:50.560 00:17:50.560 --- 10.0.0.2 ping statistics --- 00:17:50.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.560 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:17:50.560 11:53:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:17:50.560 00:17:50.560 --- 10.0.0.1 ping statistics --- 00:17:50.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.560 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:50.560 11:53:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.560 11:53:40 -- nvmf/common.sh@411 -- # return 0 00:17:50.560 11:53:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:50.560 11:53:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.561 11:53:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:50.561 11:53:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:50.561 11:53:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.561 11:53:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:50.561 11:53:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:50.561 11:53:40 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:50.561 11:53:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:50.561 11:53:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:50.561 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:17:50.561 11:53:40 -- nvmf/common.sh@470 -- # nvmfpid=2470995 00:17:50.561 11:53:40 -- nvmf/common.sh@471 -- # waitforlisten 2470995 00:17:50.561 11:53:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.561 11:53:40 -- common/autotest_common.sh@817 -- # '[' -z 2470995 ']' 00:17:50.561 11:53:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.561 11:53:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:50.561 11:53:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.561 11:53:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:50.561 11:53:40 -- common/autotest_common.sh@10 -- # set +x 00:17:50.561 [2024-04-18 11:53:41.015565] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:50.561 [2024-04-18 11:53:41.015661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.561 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.820 [2024-04-18 11:53:41.146096] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.820 [2024-04-18 11:53:41.353768] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.820 [2024-04-18 11:53:41.353816] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:50.820 [2024-04-18 11:53:41.353829] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.820 [2024-04-18 11:53:41.353842] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.820 [2024-04-18 11:53:41.353852] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.820 [2024-04-18 11:53:41.353896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.389 11:53:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:51.389 11:53:41 -- common/autotest_common.sh@850 -- # return 0 00:17:51.389 11:53:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:51.389 11:53:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:51.389 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.389 11:53:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.389 11:53:41 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.389 11:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.389 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.389 [2024-04-18 11:53:41.813125] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.389 11:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.389 11:53:41 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.389 11:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.389 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.389 Malloc0 00:17:51.389 11:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.389 11:53:41 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.389 11:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.389 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.389 11:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.389 11:53:41 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.389 11:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.389 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.389 11:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.389 11:53:41 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.389 11:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.389 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.675 [2024-04-18 11:53:41.939404] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.676 11:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.676 11:53:41 -- target/queue_depth.sh@30 -- # bdevperf_pid=2471188 00:17:51.676 11:53:41 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.676 11:53:41 -- target/queue_depth.sh@33 -- # waitforlisten 2471188 /var/tmp/bdevperf.sock 00:17:51.676 11:53:41 -- common/autotest_common.sh@817 -- # '[' -z 2471188 ']' 00:17:51.676 11:53:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.676 11:53:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:51.676 
11:53:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.676 11:53:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:51.676 11:53:41 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:51.676 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:51.676 [2024-04-18 11:53:42.023728] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:51.676 [2024-04-18 11:53:42.023824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471188 ] 00:17:51.676 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.676 [2024-04-18 11:53:42.148364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.935 [2024-04-18 11:53:42.362801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.504 11:53:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:52.504 11:53:42 -- common/autotest_common.sh@850 -- # return 0 00:17:52.504 11:53:42 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:52.504 11:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:52.504 11:53:42 -- common/autotest_common.sh@10 -- # set +x 00:17:52.504 NVMe0n1 00:17:52.504 11:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:52.504 11:53:42 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.764 Running I/O for 10 seconds... 
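The queue-depth run is driven differently from bdev_io_wait: bdevperf starts idle with -z on its own RPC socket, the NVMe-oF controller is attached over that socket, and only then is the 10-second verify workload at queue depth 1024 kicked off through bdevperf.py. A condensed sketch of the three steps visible above:

    # 1. start bdevperf idle, listening on its private RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # 2. attach the namespace exported by the target; it shows up as bdev NVMe0n1
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3. run the configured workload and report the IOPS/latency table that follows
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests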
00:18:02.759 00:18:02.759 Latency(us) 00:18:02.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.759 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:02.759 Verification LBA range: start 0x0 length 0x4000 00:18:02.759 NVMe0n1 : 10.07 11080.61 43.28 0.00 0.00 92100.35 19922.94 60817.41 00:18:02.759 =================================================================================================================== 00:18:02.759 Total : 11080.61 43.28 0.00 0.00 92100.35 19922.94 60817.41 00:18:02.759 0 00:18:02.759 11:53:53 -- target/queue_depth.sh@39 -- # killprocess 2471188 00:18:02.759 11:53:53 -- common/autotest_common.sh@936 -- # '[' -z 2471188 ']' 00:18:02.759 11:53:53 -- common/autotest_common.sh@940 -- # kill -0 2471188 00:18:02.759 11:53:53 -- common/autotest_common.sh@941 -- # uname 00:18:02.759 11:53:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.759 11:53:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2471188 00:18:02.759 11:53:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:02.759 11:53:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:02.759 11:53:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2471188' 00:18:02.759 killing process with pid 2471188 00:18:02.759 11:53:53 -- common/autotest_common.sh@955 -- # kill 2471188 00:18:02.759 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.759 00:18:02.759 Latency(us) 00:18:02.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.759 =================================================================================================================== 00:18:02.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.759 11:53:53 -- common/autotest_common.sh@960 -- # wait 2471188 00:18:03.696 11:53:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:03.697 11:53:54 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:03.697 11:53:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:03.697 11:53:54 -- nvmf/common.sh@117 -- # sync 00:18:03.697 11:53:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.697 11:53:54 -- nvmf/common.sh@120 -- # set +e 00:18:03.697 11:53:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.697 11:53:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.697 rmmod nvme_tcp 00:18:03.956 rmmod nvme_fabrics 00:18:03.956 rmmod nvme_keyring 00:18:03.956 11:53:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.956 11:53:54 -- nvmf/common.sh@124 -- # set -e 00:18:03.956 11:53:54 -- nvmf/common.sh@125 -- # return 0 00:18:03.956 11:53:54 -- nvmf/common.sh@478 -- # '[' -n 2470995 ']' 00:18:03.956 11:53:54 -- nvmf/common.sh@479 -- # killprocess 2470995 00:18:03.956 11:53:54 -- common/autotest_common.sh@936 -- # '[' -z 2470995 ']' 00:18:03.956 11:53:54 -- common/autotest_common.sh@940 -- # kill -0 2470995 00:18:03.956 11:53:54 -- common/autotest_common.sh@941 -- # uname 00:18:03.956 11:53:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.956 11:53:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2470995 00:18:03.956 11:53:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:03.956 11:53:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:03.956 11:53:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2470995' 00:18:03.956 killing process with pid 2470995 00:18:03.956 
11:53:54 -- common/autotest_common.sh@955 -- # kill 2470995 00:18:03.957 11:53:54 -- common/autotest_common.sh@960 -- # wait 2470995 00:18:05.337 11:53:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:05.337 11:53:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:05.337 11:53:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:05.337 11:53:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.337 11:53:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.337 11:53:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.337 11:53:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.337 11:53:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.872 11:53:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.872 00:18:07.872 real 0m23.495s 00:18:07.872 user 0m27.652s 00:18:07.872 sys 0m6.917s 00:18:07.872 11:53:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.872 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.872 ************************************ 00:18:07.872 END TEST nvmf_queue_depth 00:18:07.872 ************************************ 00:18:07.872 11:53:57 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:07.872 11:53:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.872 11:53:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.872 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.872 ************************************ 00:18:07.872 START TEST nvmf_multipath 00:18:07.872 ************************************ 00:18:07.872 11:53:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:07.872 * Looking for test storage... 
00:18:07.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.872 11:53:58 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.872 11:53:58 -- nvmf/common.sh@7 -- # uname -s 00:18:07.872 11:53:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.872 11:53:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.872 11:53:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.872 11:53:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.872 11:53:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.872 11:53:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.872 11:53:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.872 11:53:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.872 11:53:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.872 11:53:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.872 11:53:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.872 11:53:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:07.872 11:53:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.872 11:53:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.872 11:53:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.872 11:53:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.872 11:53:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.872 11:53:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.872 11:53:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.872 11:53:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.872 11:53:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.872 11:53:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.872 11:53:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.872 11:53:58 -- paths/export.sh@5 -- # export PATH 00:18:07.872 11:53:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.872 11:53:58 -- nvmf/common.sh@47 -- # : 0 00:18:07.872 11:53:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.872 11:53:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.872 11:53:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.872 11:53:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.872 11:53:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.872 11:53:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.872 11:53:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.872 11:53:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.873 11:53:58 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.873 11:53:58 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.873 11:53:58 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:07.873 11:53:58 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.873 11:53:58 -- target/multipath.sh@43 -- # nvmftestinit 00:18:07.873 11:53:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:07.873 11:53:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.873 11:53:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:07.873 11:53:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:07.873 11:53:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:07.873 11:53:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.873 11:53:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.873 11:53:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.873 11:53:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:07.873 11:53:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:07.873 11:53:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.873 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:18:14.433 11:54:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:14.433 11:54:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:14.433 11:54:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:14.433 11:54:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:14.433 11:54:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:14.433 11:54:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:14.433 11:54:04 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:14.433 11:54:04 -- nvmf/common.sh@295 -- # net_devs=() 00:18:14.433 11:54:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:14.433 11:54:04 -- nvmf/common.sh@296 -- # e810=() 00:18:14.433 11:54:04 -- nvmf/common.sh@296 -- # local -ga e810 00:18:14.433 11:54:04 -- nvmf/common.sh@297 -- # x722=() 00:18:14.433 11:54:04 -- nvmf/common.sh@297 -- # local -ga x722 00:18:14.433 11:54:04 -- nvmf/common.sh@298 -- # mlx=() 00:18:14.433 11:54:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:14.433 11:54:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.433 11:54:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:14.433 11:54:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:14.433 11:54:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:14.433 11:54:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.433 11:54:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:14.433 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:14.433 11:54:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.433 11:54:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:14.433 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:14.433 11:54:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:14.433 11:54:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.433 11:54:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.433 11:54:04 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:18:14.433 11:54:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.433 11:54:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:14.433 Found net devices under 0000:af:00.0: cvl_0_0 00:18:14.433 11:54:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.433 11:54:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.433 11:54:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.433 11:54:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:14.433 11:54:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.433 11:54:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:14.433 Found net devices under 0000:af:00.1: cvl_0_1 00:18:14.433 11:54:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.433 11:54:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:14.433 11:54:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:14.433 11:54:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:14.433 11:54:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:14.433 11:54:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.433 11:54:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.433 11:54:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.433 11:54:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:14.433 11:54:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.433 11:54:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.433 11:54:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:14.433 11:54:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.433 11:54:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.433 11:54:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:14.433 11:54:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:14.433 11:54:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.433 11:54:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.433 11:54:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.433 11:54:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.433 11:54:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:14.433 11:54:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.692 11:54:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.692 11:54:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.692 11:54:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:14.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:18:14.692 00:18:14.692 --- 10.0.0.2 ping statistics --- 00:18:14.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.692 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:18:14.692 11:54:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:18:14.692 00:18:14.692 --- 10.0.0.1 ping statistics --- 00:18:14.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.692 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:14.692 11:54:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.692 11:54:05 -- nvmf/common.sh@411 -- # return 0 00:18:14.692 11:54:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:14.692 11:54:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.692 11:54:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:14.692 11:54:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:14.692 11:54:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.692 11:54:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:14.692 11:54:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:14.692 11:54:05 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:14.692 11:54:05 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:14.692 only one NIC for nvmf test 00:18:14.692 11:54:05 -- target/multipath.sh@47 -- # nvmftestfini 00:18:14.693 11:54:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:14.693 11:54:05 -- nvmf/common.sh@117 -- # sync 00:18:14.693 11:54:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.693 11:54:05 -- nvmf/common.sh@120 -- # set +e 00:18:14.693 11:54:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.693 11:54:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.693 rmmod nvme_tcp 00:18:14.693 rmmod nvme_fabrics 00:18:14.693 rmmod nvme_keyring 00:18:14.693 11:54:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.693 11:54:05 -- nvmf/common.sh@124 -- # set -e 00:18:14.693 11:54:05 -- nvmf/common.sh@125 -- # return 0 00:18:14.693 11:54:05 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:14.693 11:54:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:14.693 11:54:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:14.693 11:54:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:14.693 11:54:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.693 11:54:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.693 11:54:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.693 11:54:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.693 11:54:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.224 11:54:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.224 11:54:07 -- target/multipath.sh@48 -- # exit 0 00:18:17.224 11:54:07 -- target/multipath.sh@1 -- # nvmftestfini 00:18:17.224 11:54:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:17.224 11:54:07 -- nvmf/common.sh@117 -- # sync 00:18:17.224 11:54:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.224 11:54:07 -- nvmf/common.sh@120 -- # set +e 00:18:17.224 11:54:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.224 11:54:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.224 11:54:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.224 11:54:07 -- nvmf/common.sh@124 -- # set -e 00:18:17.224 11:54:07 -- nvmf/common.sh@125 -- # return 0 00:18:17.224 11:54:07 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:17.224 11:54:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:17.224 11:54:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:17.224 11:54:07 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:18:17.224 11:54:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.224 11:54:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.224 11:54:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.224 11:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.224 11:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.224 11:54:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.224 00:18:17.224 real 0m9.168s 00:18:17.224 user 0m1.968s 00:18:17.224 sys 0m5.239s 00:18:17.224 11:54:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:17.224 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:18:17.224 ************************************ 00:18:17.224 END TEST nvmf_multipath 00:18:17.224 ************************************ 00:18:17.224 11:54:07 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:17.224 11:54:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:17.224 11:54:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.224 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:18:17.224 ************************************ 00:18:17.224 START TEST nvmf_zcopy 00:18:17.224 ************************************ 00:18:17.224 11:54:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:17.224 * Looking for test storage... 00:18:17.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.224 11:54:07 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.224 11:54:07 -- nvmf/common.sh@7 -- # uname -s 00:18:17.224 11:54:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.224 11:54:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.224 11:54:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.224 11:54:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.225 11:54:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.225 11:54:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.225 11:54:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.225 11:54:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.225 11:54:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.225 11:54:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.225 11:54:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:17.225 11:54:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:17.225 11:54:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.225 11:54:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.225 11:54:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.225 11:54:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.225 11:54:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.225 11:54:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.225 11:54:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.225 11:54:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.225 
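Before the zcopy test repeats its own setup below, it is worth spelling out what the multipath teardown traced just above (nvmftestfini) actually does: it syncs, unloads the NVMe/TCP initiator modules, removes the spdk-created network namespace, and flushes the initiator address so the next test starts clean. A condensed, approximate equivalent of those steps, assuming the cvl_0_1 interface and cvl_0_0_ns_spdk namespace names used in this run (reducing remove_spdk_ns to a plain "ip netns delete" is an assumption about its effect here, not its actual code):

    # Hedged approximation of nvmftestfini for a TCP run; not the literal common.sh code.
    sync
    modprobe -v -r nvme-tcp        # drags out nvme_tcp/nvme_fabrics/nvme_keyring, matching the rmmod lines above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of remove_spdk_ns in this job
    ip -4 addr flush cvl_0_1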
11:54:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.225 11:54:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.225 11:54:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.225 11:54:07 -- paths/export.sh@5 -- # export PATH 00:18:17.225 11:54:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.225 11:54:07 -- nvmf/common.sh@47 -- # : 0 00:18:17.225 11:54:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.225 11:54:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.225 11:54:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.225 11:54:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.225 11:54:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.225 11:54:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.225 11:54:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.225 11:54:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.225 11:54:07 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:17.225 11:54:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:17.225 11:54:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.225 11:54:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:17.225 11:54:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:17.225 11:54:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:17.225 11:54:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.225 11:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:17.225 11:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.225 11:54:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:17.225 11:54:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:17.225 11:54:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:17.225 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:18:23.866 11:54:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.866 11:54:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.866 11:54:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.866 11:54:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.866 11:54:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.866 11:54:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.866 11:54:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.866 11:54:13 -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.866 11:54:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.866 11:54:13 -- nvmf/common.sh@296 -- # e810=() 00:18:23.866 11:54:13 -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.866 11:54:13 -- nvmf/common.sh@297 -- # x722=() 00:18:23.866 11:54:13 -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.866 11:54:13 -- nvmf/common.sh@298 -- # mlx=() 00:18:23.866 11:54:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.866 11:54:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.866 11:54:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.866 11:54:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.866 11:54:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.866 11:54:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.866 11:54:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:23.866 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:23.866 11:54:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.866 11:54:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:23.866 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:18:23.866 11:54:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.866 11:54:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.866 11:54:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.866 11:54:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.866 11:54:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.866 11:54:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.866 11:54:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:23.866 Found net devices under 0000:af:00.0: cvl_0_0 00:18:23.866 11:54:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.866 11:54:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.866 11:54:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.866 11:54:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.866 11:54:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.866 11:54:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:23.866 Found net devices under 0000:af:00.1: cvl_0_1 00:18:23.866 11:54:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.866 11:54:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:23.866 11:54:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:23.867 11:54:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:23.867 11:54:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:23.867 11:54:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:23.867 11:54:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.867 11:54:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.867 11:54:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.867 11:54:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.867 11:54:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.867 11:54:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.867 11:54:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.867 11:54:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.867 11:54:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.867 11:54:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.867 11:54:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.867 11:54:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.867 11:54:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.867 11:54:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.867 11:54:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.867 11:54:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.867 11:54:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.867 11:54:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.867 
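The nvmf_tcp_init sequence traced here (and earlier for the multipath test) turns the two E810 ports found above into a back-to-back NVMe/TCP test link: the first port (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, while the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Reconstructed from the ip/iptables/ping commands in the xtrace, using the same names and addresses:

    # Target port in its own netns, initiator port in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let the NVMe/TCP port through
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The two pings are the sanity check whose output appears in the log; once both answer, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" so the target later runs inside that namespace.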
11:54:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.867 11:54:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:18:23.867 00:18:23.867 --- 10.0.0.2 ping statistics --- 00:18:23.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.867 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:23.867 11:54:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:18:23.867 00:18:23.867 --- 10.0.0.1 ping statistics --- 00:18:23.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.867 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:18:23.867 11:54:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.867 11:54:13 -- nvmf/common.sh@411 -- # return 0 00:18:23.867 11:54:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:23.867 11:54:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.867 11:54:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:23.867 11:54:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:23.867 11:54:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.867 11:54:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:23.867 11:54:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:23.867 11:54:13 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:23.867 11:54:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:23.867 11:54:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:23.867 11:54:13 -- common/autotest_common.sh@10 -- # set +x 00:18:23.867 11:54:13 -- nvmf/common.sh@470 -- # nvmfpid=2481358 00:18:23.867 11:54:13 -- nvmf/common.sh@471 -- # waitforlisten 2481358 00:18:23.867 11:54:13 -- common/autotest_common.sh@817 -- # '[' -z 2481358 ']' 00:18:23.867 11:54:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.867 11:54:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.867 11:54:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.867 11:54:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.867 11:54:13 -- common/autotest_common.sh@10 -- # set +x 00:18:23.867 11:54:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.867 [2024-04-18 11:54:13.845079] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:23.867 [2024-04-18 11:54:13.845169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.867 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.867 [2024-04-18 11:54:13.972816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.867 [2024-04-18 11:54:14.175152] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
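nvmfappstart -m 0x2 then launches the target inside that namespace and waits for its JSON-RPC socket before any configuration is attempted; the traced command is "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2" followed by waitforlisten on the resulting PID. A minimal stand-in for that helper (the polling loop and the rpc_get_methods probe are an approximation of waitforlisten rather than its actual code, and the default /var/tmp/spdk.sock socket is assumed):

    # Start the NVMe-oF target in the test namespace on core 1 (-m 0x2).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Wait until the app answers on its RPC socket (unix sockets are not
    # network-namespaced, so this works from the root namespace).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done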
00:18:23.867 [2024-04-18 11:54:14.175198] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.867 [2024-04-18 11:54:14.175211] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.867 [2024-04-18 11:54:14.175224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.867 [2024-04-18 11:54:14.175234] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.867 [2024-04-18 11:54:14.175268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.126 11:54:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.126 11:54:14 -- common/autotest_common.sh@850 -- # return 0 00:18:24.126 11:54:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.126 11:54:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.126 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.126 11:54:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.126 11:54:14 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:24.126 11:54:14 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:24.126 11:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.126 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.126 [2024-04-18 11:54:14.644857] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.126 11:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.126 11:54:14 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:24.126 11:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.126 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.126 11:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.126 11:54:14 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.126 11:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.126 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.126 [2024-04-18 11:54:14.661056] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.126 11:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.126 11:54:14 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:24.126 11:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.126 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.126 11:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.126 11:54:14 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:24.126 11:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.126 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.386 malloc0 00:18:24.386 11:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.386 11:54:14 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.386 11:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.386 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:18:24.386 11:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.386 11:54:14 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:24.386 11:54:14 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:24.386 11:54:14 -- nvmf/common.sh@521 -- # config=() 00:18:24.386 11:54:14 -- nvmf/common.sh@521 -- # local subsystem config 00:18:24.386 11:54:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:24.386 11:54:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:24.386 { 00:18:24.386 "params": { 00:18:24.386 "name": "Nvme$subsystem", 00:18:24.386 "trtype": "$TEST_TRANSPORT", 00:18:24.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.386 "adrfam": "ipv4", 00:18:24.386 "trsvcid": "$NVMF_PORT", 00:18:24.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.386 "hdgst": ${hdgst:-false}, 00:18:24.386 "ddgst": ${ddgst:-false} 00:18:24.386 }, 00:18:24.386 "method": "bdev_nvme_attach_controller" 00:18:24.386 } 00:18:24.386 EOF 00:18:24.386 )") 00:18:24.386 11:54:14 -- nvmf/common.sh@543 -- # cat 00:18:24.386 11:54:14 -- nvmf/common.sh@545 -- # jq . 00:18:24.386 11:54:14 -- nvmf/common.sh@546 -- # IFS=, 00:18:24.386 11:54:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:24.386 "params": { 00:18:24.386 "name": "Nvme1", 00:18:24.386 "trtype": "tcp", 00:18:24.386 "traddr": "10.0.0.2", 00:18:24.386 "adrfam": "ipv4", 00:18:24.386 "trsvcid": "4420", 00:18:24.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.386 "hdgst": false, 00:18:24.386 "ddgst": false 00:18:24.386 }, 00:18:24.386 "method": "bdev_nvme_attach_controller" 00:18:24.386 }' 00:18:24.386 [2024-04-18 11:54:14.819082] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:24.386 [2024-04-18 11:54:14.819172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481393 ] 00:18:24.386 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.646 [2024-04-18 11:54:14.944533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.646 [2024-04-18 11:54:15.157920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.215 Running I/O for 10 seconds... 
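With the target listening, zcopy.sh drives the whole configuration over JSON-RPC: a TCP transport created with zero-copy enabled and no in-capsule data (-c 0 --zcopy), a subsystem allowing any host with up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4 KiB blocks attached as namespace 1. bdevperf then consumes the generated JSON shown above and runs the 10-second verify workload whose results follow. The same sequence written against scripts/rpc.py directly (rpc_cmd in the trace is the test framework's wrapper around it, and wiring the generated config to fd 62 via process substitution is an assumption about how the test feeds /dev/fd/62):

    # Target-side configuration; flags copied from the xtrace above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: bdevperf reads the bdev_nvme_attach_controller config
    # produced by gen_nvmf_target_json (test/nvmf/common.sh) and runs a 10 s
    # verify workload at queue depth 128 with 8 KiB I/Os.
    ./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 \
        62< <(gen_nvmf_target_json)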
00:18:37.427 00:18:37.427 Latency(us) 00:18:37.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.427 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:37.427 Verification LBA range: start 0x0 length 0x1000 00:18:37.427 Nvme1n1 : 10.01 7480.05 58.44 0.00 0.00 17066.82 1992.29 34183.58 00:18:37.427 =================================================================================================================== 00:18:37.427 Total : 7480.05 58.44 0.00 0.00 17066.82 1992.29 34183.58 00:18:37.427 11:54:26 -- target/zcopy.sh@39 -- # perfpid=2483503 00:18:37.427 11:54:26 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:37.427 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:18:37.427 11:54:26 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:37.427 11:54:26 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:37.427 11:54:26 -- nvmf/common.sh@521 -- # config=() 00:18:37.427 11:54:26 -- nvmf/common.sh@521 -- # local subsystem config 00:18:37.427 11:54:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:37.427 [2024-04-18 11:54:26.767335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.427 11:54:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:37.427 { 00:18:37.427 "params": { 00:18:37.427 "name": "Nvme$subsystem", 00:18:37.428 "trtype": "$TEST_TRANSPORT", 00:18:37.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.428 "adrfam": "ipv4", 00:18:37.428 "trsvcid": "$NVMF_PORT", 00:18:37.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.428 "hdgst": ${hdgst:-false}, 00:18:37.428 "ddgst": ${ddgst:-false} 00:18:37.428 }, 00:18:37.428 "method": "bdev_nvme_attach_controller" 00:18:37.428 } 00:18:37.428 EOF 00:18:37.428 )") 00:18:37.428 [2024-04-18 11:54:26.767376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 11:54:26 -- nvmf/common.sh@543 -- # cat 00:18:37.428 [2024-04-18 11:54:26.775321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.775348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 11:54:26 -- nvmf/common.sh@545 -- # jq . 
00:18:37.428 11:54:26 -- nvmf/common.sh@546 -- # IFS=, 00:18:37.428 11:54:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:37.428 "params": { 00:18:37.428 "name": "Nvme1", 00:18:37.428 "trtype": "tcp", 00:18:37.428 "traddr": "10.0.0.2", 00:18:37.428 "adrfam": "ipv4", 00:18:37.428 "trsvcid": "4420", 00:18:37.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.428 "hdgst": false, 00:18:37.428 "ddgst": false 00:18:37.428 }, 00:18:37.428 "method": "bdev_nvme_attach_controller" 00:18:37.428 }' 00:18:37.428 [2024-04-18 11:54:26.783314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.783338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.791357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.791381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.799365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.799386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.807374] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.807396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.815410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.815431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.823431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.823458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.831445] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.831472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.839479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.839500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.842199] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:18:37.428 [2024-04-18 11:54:26.842278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483503 ] 00:18:37.428 [2024-04-18 11:54:26.847491] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.847511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.855524] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.855544] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.863558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.863578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.871551] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.871571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.879586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.879605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.887606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.887627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.895614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.895637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.903645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.903665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.428 [2024-04-18 11:54:26.911660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.911679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.919698] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.919717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.927712] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.927733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.935721] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.935741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.943757] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.943777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.951778] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.951798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.959811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.959831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.966398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.428 [2024-04-18 11:54:26.967832] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.967852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.975837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.975858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.983866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.983887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.991887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.991907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:26.999909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:26.999929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.007931] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.007951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.015952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.015973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.023969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.023990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.032001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.032021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.040008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.040031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.048053] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.048073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.056073] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.056092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.064068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.064088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.072108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.072128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.080130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.080150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.088143] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.088163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.096173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.096192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.104182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.104202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.428 [2024-04-18 11:54:27.112221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.428 [2024-04-18 11:54:27.112241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.120241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.120261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.128252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.128271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.136282] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.136301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.144302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.144321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.152341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.152360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.160347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.160366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.168356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.168375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.176392] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.176411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
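The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs here is expected output, not a failure: while the second bdevperf instance (-t 5 -q 128 -w randrw -M 50 -o 8192, perfpid=2483503) keeps zero-copy I/O in flight, the test repeatedly calls nvmf_subsystem_add_ns with the NSID that is already attached, so each RPC pauses the subsystem, is rejected in the nvmf_rpc_ns_paused callback, and resumes it, exercising the pause/resume path under load. A hypothetical loop that would produce exactly this pattern (the literal zcopy.sh loop may be shaped differently):

    # Each call is expected to fail while NSID 1 is still attached; '|| true'
    # keeps the loop alive under 'set -e'.
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done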
00:18:37.429 [2024-04-18 11:54:27.180932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.429 [2024-04-18 11:54:27.184416] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.184436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.192425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.192446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.200472] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.200493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.208491] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.208510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.216502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.216521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.224533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.224552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.232539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.232558] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.240578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.240597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.248604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.248623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.256611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.256631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.264632] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.264652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.272663] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.272683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.280669] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.280689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.288709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.288729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.296710] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.296730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.304750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.304770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.312763] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.312782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.320772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.320791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.328809] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.328829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.336831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.336851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.344853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.344874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.352872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.352892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.360889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.360909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.368915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.368935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.376942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.376962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.384949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.384970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.392989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.393010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.401003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.401023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.409015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.409036] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.417046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.417066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.425057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.425077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.433096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.433116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.441124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.441143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.449123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.449143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.457153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.457172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.465173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.465193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.473191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.473211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.481224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.481244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.489229] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.489252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.497262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.497282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.505283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.505302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.513323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.429 [2024-04-18 11:54:27.513343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.429 [2024-04-18 11:54:27.521329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.521349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.529350] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.529370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.537382] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.537401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.545396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.545415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.553411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.553431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.561440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.561467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.569494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.569519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.577487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.577510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.585539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.585561] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.593545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.593567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.601553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.601574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.609589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.609609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.617597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.617618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.625634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.625654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.633671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.633693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.641677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.641722] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.649710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.649732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.657732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.657753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.665800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.665823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.673784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.673808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 Running I/O for 5 seconds... 00:18:37.430 [2024-04-18 11:54:27.681787] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.681808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.700234] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.700260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.709031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.709057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.718009] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.718034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.727338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.727366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.735770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.735795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.744790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.744816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.753603] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.753628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.762317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.762342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.770994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.771019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.779787] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.779813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.788399] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.788424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.797163] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.797188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.806098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.806123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.814819] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.814847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.823655] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.823680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.832331] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.832363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.841088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.841113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.849779] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.849803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.858525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.858550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.867263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.867288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.876344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.876370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.885158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.885183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.894165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.894191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.902974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.902999] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.911856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.911881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.920877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.920902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.932645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.932670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.940806] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.940830] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.951463] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.951488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.959679] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.959704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.430 [2024-04-18 11:54:27.968696] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.430 [2024-04-18 11:54:27.968722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:27.980171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:27.980197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:27.987897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:27.987922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:27.998569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:27.998593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.006515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.006539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.017254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.017279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.025761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.025785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.034865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.034889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.043830] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.043854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.053269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.053294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.063644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.063669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.072806] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.072831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.081394] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.081418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.090783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.090808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.099480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.099505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.108002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.108027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.116437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.116469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.125179] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.125203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.133386] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.133411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.143936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.143961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.152421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.152445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.163947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.163972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.172936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.172961] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.184072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.184101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.195086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.195112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.202738] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.202763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.213717] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.213746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.222307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.222332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.691 [2024-04-18 11:54:28.230953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.691 [2024-04-18 11:54:28.230978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.240360] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.240384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.249126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.249151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.257986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.258011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.266820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.266844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.275659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.275683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.284241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.284266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.292997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.293021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.301499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.301524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.310272] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.310298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.318792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.318817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.329890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.329915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.339699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.339723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.347336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.347360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.356159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.356190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.365154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.365184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.373851] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.373876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.382659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.382683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.390645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.390670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.401114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.401139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.409135] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.409159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.418058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.418089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.426377] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.426402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.435549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.435574] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.443290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.443315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.453912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.453938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.462477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.462502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.471130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.471155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.480133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.480158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.488954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.488985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.951 [2024-04-18 11:54:28.498350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.951 [2024-04-18 11:54:28.498376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.508338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.508363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.519038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.519071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.527082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.527107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.537281] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.537306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.546109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.546134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.555728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.555753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.564557] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.564581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.575134] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.575159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.582876] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.582901] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.591457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.591483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.602269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.211 [2024-04-18 11:54:28.602294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.211 [2024-04-18 11:54:28.612167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.612191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.620743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.620767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.630028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.630052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.640887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.640912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.651056] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.651081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.658841] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.658866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.670790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.670815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.681238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.681267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.688959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.688984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.700601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.700626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.711961] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.711986] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.719490] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.719525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.730649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.730674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.740236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.740261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.748297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.748321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.212 [2024-04-18 11:54:28.758807] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.212 [2024-04-18 11:54:28.758832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.766920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.766945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.775938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.775962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.783949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.783974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.793153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.793178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.801938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.801963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.810890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.810914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.819534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.819559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.828063] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.828087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.836805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.836829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.845559] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.845583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.854312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.854340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.862925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.862949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.873075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.873099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.883546] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.883571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.891241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.891265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.901881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.901905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.909839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.909864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.920585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.920610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.928997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.929023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.937651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.937676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.946513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.946538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.955103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.955128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.963853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.963879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.972652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.972677] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.981213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.981239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.989772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.989798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:28.998601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:28.998626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:29.007351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:29.007383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.472 [2024-04-18 11:54:29.016125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.472 [2024-04-18 11:54:29.016150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.732 [2024-04-18 11:54:29.024726] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.732 [2024-04-18 11:54:29.024754] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.732 [2024-04-18 11:54:29.033263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.732 [2024-04-18 11:54:29.033288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.042153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.042178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.050911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.050936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.059710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.059734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.068285] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.068309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.077207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.077231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.085974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.085999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.094571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.094596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.103402] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.103428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.112242] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.112267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.120933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.120958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.129615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.129639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.138320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.138345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.146920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.146945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.155837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.155861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.164635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.164660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.173339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.173364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.182482] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.182507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.191607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.191635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.200943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.200968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.209653] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.209678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.218240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.218265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.226680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.226705] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.235364] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.235390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.243848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.243874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.252283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.252308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.260863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.260887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.269483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.269506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.733 [2024-04-18 11:54:29.278511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.733 [2024-04-18 11:54:29.278535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.287153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.287177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.295985] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.296009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.304709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.304733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.313722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.313746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.322574] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.322599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.331244] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.331268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.339930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.339955] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.348850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.348874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.357379] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.357404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.365601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.365625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.374314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.374339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.383433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.383466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.392397] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.392421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.401194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.401218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.409847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.409872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.418762] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.418787] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.427712] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.427737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.436368] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.436392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.445118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.445143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.453740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.453766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.462155] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.462180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.471994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.472020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.481727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.481752] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.489007] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.489031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.499853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.499878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.507800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.507824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.518305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.518330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.526363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.526387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.993 [2024-04-18 11:54:29.536867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.993 [2024-04-18 11:54:29.536892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.544915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.544941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.555952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.555977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.563960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.563985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.575596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.575628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.587107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.587133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.595052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.595077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.605923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.253 [2024-04-18 11:54:29.605948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.253 [2024-04-18 11:54:29.613802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.613826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.624913] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.624937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.633406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.633431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.642214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.642239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.650783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.650807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.659385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.659410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.667911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.667935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.676707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.676731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.685278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.685303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.694232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.694257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.703019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.703044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.711855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.711881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.720833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.720858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.729528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.729553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.738128] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.738152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.746772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.746796] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.755308] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.755332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.764132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.764157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.772671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.772695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.781422] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.781447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.790347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.790373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.254 [2024-04-18 11:54:29.799440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.254 [2024-04-18 11:54:29.799471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.808046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.808071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.816700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.816724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.825309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.825333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.833942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.833967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.842649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.842673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.851345] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.851371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.860010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.860036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.868716] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.868741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.877093] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.877117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.886485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.886509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.895235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.895260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.904106] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.904130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.912914] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.912939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.921532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.921556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.930201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.930226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.938256] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.938281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.947582] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.947606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.956194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.956219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.965367] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.965393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.974225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.974249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.983067] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.983092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:29.991485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:29.991509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.001045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.001071] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.009566] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.009591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.019081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.019106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.029686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.029716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.039439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.039471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.047214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.047239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.514 [2024-04-18 11:54:30.058597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.514 [2024-04-18 11:54:30.058622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.066537] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.066562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.077865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.077892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.086412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.086460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.097266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.097291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.106086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.106111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.117332] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.117359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.127140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.127166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.134691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.134714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.145813] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.145837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.153838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.153869] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.163259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.163285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.172169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.172194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.180675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.180700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.189285] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.189310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.774 [2024-04-18 11:54:30.198016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.774 [2024-04-18 11:54:30.198041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.207079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.207108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.215858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.215883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.224795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.224820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.233780] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.233804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.242923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.242948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.252018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.252044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.261049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.261075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.269875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.269901] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.278968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.278992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.287965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.287990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.296016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.296041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.305458] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.305483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.775 [2024-04-18 11:54:30.314036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.775 [2024-04-18 11:54:30.314061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.034 [2024-04-18 11:54:30.323194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.034 [2024-04-18 11:54:30.323220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.034 [2024-04-18 11:54:30.332181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.034 [2024-04-18 11:54:30.332206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.034 [2024-04-18 11:54:30.341165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.034 [2024-04-18 11:54:30.341190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.034 [2024-04-18 11:54:30.349959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.034 [2024-04-18 11:54:30.349984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.034 [2024-04-18 11:54:30.359443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.359474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.369779] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.369804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.377310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.377338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.388640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.388666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.397123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.397148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.406602] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.406627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.415107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.415132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.423766] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.423791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.433429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.433471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.444540] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.444565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.452590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.452615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.463436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.463469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.474001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.474026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.482531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.482555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.491514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.491555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.500548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.500585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.509732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.509757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.518672] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.518697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.527437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.527471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.536471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.536497] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.545380] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.545421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.554387] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.554417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.563302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.563328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.572174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.572200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.035 [2024-04-18 11:54:30.581011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.035 [2024-04-18 11:54:30.581037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.589947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.589972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.599041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.599066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.608086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.608111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.618262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.618288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.627960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.627986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.635612] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.635637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.647192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.647218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.655671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.655696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.666427] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.666459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.674140] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.674166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.685017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.685043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.693593] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.693617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.702543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.702568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.711417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.711443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.719986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.720011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.728611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.728636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.737111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.737160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.745993] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.746019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.754815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.754840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.763526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.763550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.776874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.776900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.784636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.784661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.795402] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.795428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.803561] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.803586] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.814270] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.814296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.822343] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.822368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.832751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.832776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.294 [2024-04-18 11:54:30.840713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.294 [2024-04-18 11:54:30.840738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.851553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.553 [2024-04-18 11:54:30.851579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.859412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.553 [2024-04-18 11:54:30.859438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.870995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.553 [2024-04-18 11:54:30.871020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.879660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.553 [2024-04-18 11:54:30.879684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.888495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.553 [2024-04-18 11:54:30.888519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.897562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.553 [2024-04-18 11:54:30.897587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.553 [2024-04-18 11:54:30.906380] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.906405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.915456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.915481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.924258] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.924284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.933152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.933177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.942109] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.942134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.950934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.950959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.959745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.959770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.968501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.968527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.977396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.977421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.986421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.986446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:30.995559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:30.995585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.004543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.004568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.013414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.013441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.022167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.022192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.030850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.030875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.039746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.039771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.048424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.048458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.057322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.057347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.066122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.066146] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.074958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.074984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.083612] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.083636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.092437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.092469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.554 [2024-04-18 11:54:31.101510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.554 [2024-04-18 11:54:31.101535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.110406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.110431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.119429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.119460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.128489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.128514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.137079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.137104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.145662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.145686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.154149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.154174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.163221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.163246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.172343] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.172368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.181468] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.181493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.190623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.190647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.199536] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.199560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.208031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.208055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.216812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.216837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.225493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.225517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.234237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.234262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.243001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.243026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.251773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.251797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.260369] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.260394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.268935] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.268960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.277371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.277396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.286172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.286197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.294788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.294813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.303688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.303713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.312883] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.312915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.321922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.321947] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.330572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.330597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.339329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.339353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.348061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.348085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.814 [2024-04-18 11:54:31.356829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.814 [2024-04-18 11:54:31.356854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.365692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.365717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.374670] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.374695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.383566] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.383591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.392422] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.392447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.401075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.401103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.409795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.409820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.418662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.418687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.074 [2024-04-18 11:54:31.427126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.074 [2024-04-18 11:54:31.427151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.435831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.435856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.444723] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.444749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.453969] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.453994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.462818] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.462843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.471808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.471833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.480619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.480644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.489606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.489631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.498239] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.498264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.506788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.506813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.516776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.516801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.524720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.524745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.536020] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.536045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.544260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.544284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.554746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.554770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.562667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.562691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.574045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.574075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.582564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.582589] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.591130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.591154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.599927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.599951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.608635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.608659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.075 [2024-04-18 11:54:31.618749] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.075 [2024-04-18 11:54:31.618774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.627046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.627070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.638838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.638863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.648562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.648587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.656083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.656107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.667388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.667413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.675297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.675321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.685626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.685650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.693656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.693680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.704243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.704268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.713938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.713963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.721318] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.721343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.732573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.732597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.742618] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.742642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.750335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.750363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.761577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.761602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.769398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.769422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.779496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.779520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.787206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.787230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.798103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.798129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.807510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.807535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.816548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.816574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.824583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.824607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.833245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.833270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.842186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.842211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.851096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.851121] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.860016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-04-18 11:54:31.860041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-04-18 11:54:31.868981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.336 [2024-04-18 11:54:31.869006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.336 [2024-04-18 11:54:31.876614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.336 [2024-04-18 11:54:31.876638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.886259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.886284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.894623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.894655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.903915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.903940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.912692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.912717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.921889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.921919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.931078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.931103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.942787] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.942812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.950595] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.950620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.962236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.962262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.970865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.970890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.979432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.979464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.987288] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.987312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:31.996095] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:31.996119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.005522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:32.005548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.013563] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:32.013588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.024318] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:32.024342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.033410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:32.033435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.044149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:32.044174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.052760] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.595 [2024-04-18 11:54:32.052785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.595 [2024-04-18 11:54:32.061370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.061394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.070104] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.070128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.079001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.079025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.088075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.088101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.096976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.097002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.105815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.105840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.114721] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.114747] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.123835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.123861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.132932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.132958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.596 [2024-04-18 11:54:32.141596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.596 [2024-04-18 11:54:32.141622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.854 [2024-04-18 11:54:32.150440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.150473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.159315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.159341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.168425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.168458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.177225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.177251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.186099] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.186123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.194826] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.194851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.203724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.203750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.212504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.212528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.221148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.221172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.229956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.229981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.238695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.238720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.247239] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.247265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.256353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.256378] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.265144] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.265168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.273719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.273744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.282328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.282354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.290828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.290853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.299548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.299574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.308511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.308536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.317152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.317177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.325880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.325904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.334341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.334366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.342995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.343021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.351922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.351947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.360702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.360726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.369610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.369635] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.378346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.378371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.387517] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.387542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.855 [2024-04-18 11:54:32.395686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.855 [2024-04-18 11:54:32.395710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.114 [2024-04-18 11:54:32.406362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.114 [2024-04-18 11:54:32.406388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.114 [2024-04-18 11:54:32.414501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.114 [2024-04-18 11:54:32.414526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.114 [2024-04-18 11:54:32.423529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.114 [2024-04-18 11:54:32.423554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.432260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.432285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.440824] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.440848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.449238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.449263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.458253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.458278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.466903] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.466935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.475587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.475611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.484218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.484242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.492747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.492771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.501707] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.501731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.510677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.510702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.519303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.519327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.528288] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.528313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.537391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.537416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.546302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.546328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.555215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.555240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.564119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.564144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.573071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.573096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.581915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.581940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.590667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.590692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.599433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.599466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.608212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.608237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.616719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.616744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.625230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.625255] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.633850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.633875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.642431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.642465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.651213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.651238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.115 [2024-04-18 11:54:32.659985] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.115 [2024-04-18 11:54:32.660010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.668559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.668585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.677448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.677480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.687788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.687814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.695659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.695684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 00:18:42.374 Latency(us) 00:18:42.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.374 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:42.374 Nvme1n1 : 5.00 14386.06 112.39 0.00 0.00 8891.86 2582.12 23697.82 00:18:42.374 =================================================================================================================== 00:18:42.374 Total : 14386.06 112.39 0.00 0.00 8891.86 2582.12 23697.82 00:18:42.374 [2024-04-18 11:54:32.703358] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.703380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.711382] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.711405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.719403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.719424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.727423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.727447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.735439] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.735464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.743459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.743482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.751509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.751532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.759514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.759534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.767518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.767539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.775554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.775575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.783569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.783588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.374 [2024-04-18 11:54:32.791581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.374 [2024-04-18 11:54:32.791601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.799611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.799630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.807620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.807639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.815656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.815675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.823694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.823713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.831694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.831714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.839727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.839747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.847750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.847770] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.855761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.855781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.863791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.863810] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.871815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.871835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.879838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.879862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.887862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.887882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.895867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.895886] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.903899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.903919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.911923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.911944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.375 [2024-04-18 11:54:32.919955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.375 [2024-04-18 11:54:32.919975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.927967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.927987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.935974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.935994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.944010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.944030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.952028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.952047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.960042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.960061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.968070] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.968090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.976102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.976122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.984106] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.984126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:32.992142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:32.992162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.000149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.000174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.008187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.008207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.016212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.016231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.024210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.024229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.032254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.032277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.040269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.040288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.048278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.048298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.056321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.056341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.064325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.064344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.072359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.072379] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.080388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.080407] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.088382] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.088402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.096424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.096443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.104442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.104467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.112473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.112492] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.120493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.120512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.128499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.128518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.136533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.136552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.144556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.144576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.152567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.152586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.160595] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.160615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.168616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.168635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.176635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.176655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.648 [2024-04-18 11:54:33.184668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.648 [2024-04-18 11:54:33.184691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.192675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.192695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.200711] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.200732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.208730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.208750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.216761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.216780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.224773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.224793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.232799] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.232818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.240805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.240824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.248833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.248853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.256844] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.256864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.264887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.264907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.272904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.272924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.280911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.280930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.288949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.288969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.296973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.296993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.304995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.305015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.313012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.313032] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.321022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.321044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.936 [2024-04-18 11:54:33.329058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.936 [2024-04-18 11:54:33.329079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.337086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.337105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.345092] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.345111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.353119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.353139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.361139] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.361159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.369155] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.369178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.377184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.377204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.385192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.385211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.393224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.393243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.401263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.401282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.409259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.409280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.417293] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.417313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.425317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.425338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.433326] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.433345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.441361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.441380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.449363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.449383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.457396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.457415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.465419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.465439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.473435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.473460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.937 [2024-04-18 11:54:33.481474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.937 [2024-04-18 11:54:33.481494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.489487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.489507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.497513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.497532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.505530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.505550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.513547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.513573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.521578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.521598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.529588] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.529608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.537600] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.537619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.545638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.545657] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.553659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.553678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.561664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.561683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.569699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.569719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.577708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.577727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.585746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.585765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.593773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.593793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.601772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.601792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.609808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.609827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.617828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.617847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.625839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.625858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.633875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.633895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.641886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.641905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.649916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.649935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.657940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.657960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.665951] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.665972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.673986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.674006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.196 [2024-04-18 11:54:33.682002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.196 [2024-04-18 11:54:33.682022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.197 [2024-04-18 11:54:33.690042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.197 [2024-04-18 11:54:33.690062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.197 [2024-04-18 11:54:33.698051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.197 [2024-04-18 11:54:33.698071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2483503) - No such process 00:18:43.197 11:54:33 -- target/zcopy.sh@49 -- # wait 2483503 00:18:43.197 11:54:33 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:43.197 11:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.197 11:54:33 -- common/autotest_common.sh@10 -- # set +x 00:18:43.197 11:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.197 11:54:33 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:43.197 11:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.197 11:54:33 -- common/autotest_common.sh@10 -- # set +x 00:18:43.197 delay0 00:18:43.197 11:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.197 11:54:33 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:43.197 11:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.197 11:54:33 -- common/autotest_common.sh@10 -- # set +x 00:18:43.197 11:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.197 11:54:33 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:43.455 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.455 [2024-04-18 11:54:33.863779] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:50.022 Initializing NVMe Controllers 00:18:50.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:50.022 Initialization complete. Launching workers. 
00:18:50.022 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 103 00:18:50.022 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 36 00:18:50.022 success 215, unsuccess 172, failed 0 00:18:50.022 11:54:40 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:50.022 11:54:40 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:50.022 11:54:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:50.022 11:54:40 -- nvmf/common.sh@117 -- # sync 00:18:50.022 11:54:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.022 11:54:40 -- nvmf/common.sh@120 -- # set +e 00:18:50.022 11:54:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.022 11:54:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.022 rmmod nvme_tcp 00:18:50.022 rmmod nvme_fabrics 00:18:50.022 rmmod nvme_keyring 00:18:50.022 11:54:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.022 11:54:40 -- nvmf/common.sh@124 -- # set -e 00:18:50.022 11:54:40 -- nvmf/common.sh@125 -- # return 0 00:18:50.022 11:54:40 -- nvmf/common.sh@478 -- # '[' -n 2481358 ']' 00:18:50.022 11:54:40 -- nvmf/common.sh@479 -- # killprocess 2481358 00:18:50.022 11:54:40 -- common/autotest_common.sh@936 -- # '[' -z 2481358 ']' 00:18:50.022 11:54:40 -- common/autotest_common.sh@940 -- # kill -0 2481358 00:18:50.022 11:54:40 -- common/autotest_common.sh@941 -- # uname 00:18:50.022 11:54:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.022 11:54:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2481358 00:18:50.022 11:54:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:50.022 11:54:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:50.022 11:54:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2481358' 00:18:50.022 killing process with pid 2481358 00:18:50.022 11:54:40 -- common/autotest_common.sh@955 -- # kill 2481358 00:18:50.022 11:54:40 -- common/autotest_common.sh@960 -- # wait 2481358 00:18:51.397 11:54:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:51.397 11:54:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:51.397 11:54:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:51.397 11:54:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.397 11:54:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.397 11:54:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.397 11:54:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.397 11:54:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.301 11:54:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.301 00:18:53.301 real 0m36.254s 00:18:53.301 user 0m48.628s 00:18:53.301 sys 0m12.389s 00:18:53.301 11:54:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:53.301 11:54:43 -- common/autotest_common.sh@10 -- # set +x 00:18:53.301 ************************************ 00:18:53.301 END TEST nvmf_zcopy 00:18:53.301 ************************************ 00:18:53.301 11:54:43 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.301 11:54:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:53.301 11:54:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:53.301 11:54:43 -- common/autotest_common.sh@10 -- # set +x 00:18:53.559 
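The wall of "Requested NSID 1 already in use" / "Unable to add namespace" messages above comes from zcopy.sh repeatedly retrying nvmf_subsystem_add_ns for NSID 1, which is already attached, so each attempt fails in nvmf_rpc_ns_paused; the run rides through every failure and still completes the randrw job. The tail of the test then removes the namespace, re-attaches it behind a delay bdev, and drives it with the abort example. A minimal shell sketch of that sequence, assuming the target is already running with nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 and that rpc.py talks to its default socket (the $SPDK path below matches this workspace; adjust for any other checkout):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Detach the current namespace 1 from the subsystem
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # Wrap malloc0 in a delay bdev (latency arguments as used by zcopy.sh)
    $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-attach the delayed bdev as namespace 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive it with the abort example: core mask 0x1, 5 s, queue depth 64, 50/50 randrw over TCP
    "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The mix of successful and unsuccessful aborts reported afterwards is what the delay bdev is for: the added latency keeps commands outstanding long enough for the abort requests to catch them in flight.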
************************************ 00:18:53.559 START TEST nvmf_nmic 00:18:53.559 ************************************ 00:18:53.559 11:54:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.559 * Looking for test storage... 00:18:53.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.559 11:54:44 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.559 11:54:44 -- nvmf/common.sh@7 -- # uname -s 00:18:53.559 11:54:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.559 11:54:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.559 11:54:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.559 11:54:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.559 11:54:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.559 11:54:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.559 11:54:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.559 11:54:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.559 11:54:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.560 11:54:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.560 11:54:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.560 11:54:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:53.560 11:54:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.560 11:54:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.560 11:54:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.560 11:54:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.560 11:54:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.560 11:54:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.560 11:54:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.560 11:54:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.560 11:54:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.560 11:54:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.560 11:54:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.560 11:54:44 -- paths/export.sh@5 -- # export PATH 00:18:53.560 11:54:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.560 11:54:44 -- nvmf/common.sh@47 -- # : 0 00:18:53.560 11:54:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.560 11:54:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.560 11:54:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.560 11:54:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.560 11:54:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.560 11:54:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.560 11:54:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.560 11:54:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.560 11:54:44 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.560 11:54:44 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.560 11:54:44 -- target/nmic.sh@14 -- # nvmftestinit 00:18:53.560 11:54:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:53.560 11:54:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.560 11:54:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:53.560 11:54:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:53.560 11:54:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:53.560 11:54:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.560 11:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.560 11:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.560 11:54:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:53.560 11:54:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:53.560 11:54:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.560 11:54:44 -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 11:54:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:00.120 11:54:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.120 11:54:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.120 11:54:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.120 11:54:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.120 11:54:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.120 11:54:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.120 11:54:49 -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.120 11:54:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.120 11:54:49 -- nvmf/common.sh@296 -- # 
e810=() 00:19:00.120 11:54:49 -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.120 11:54:49 -- nvmf/common.sh@297 -- # x722=() 00:19:00.120 11:54:49 -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.120 11:54:49 -- nvmf/common.sh@298 -- # mlx=() 00:19:00.120 11:54:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.120 11:54:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.120 11:54:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.120 11:54:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.120 11:54:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.120 11:54:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.120 11:54:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:00.120 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:00.120 11:54:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.120 11:54:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:00.120 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:00.120 11:54:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.120 11:54:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.120 11:54:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.120 11:54:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:00.120 11:54:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.120 11:54:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:00.120 Found net 
devices under 0000:af:00.0: cvl_0_0 00:19:00.120 11:54:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.120 11:54:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.120 11:54:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.120 11:54:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:00.120 11:54:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.120 11:54:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:00.120 Found net devices under 0000:af:00.1: cvl_0_1 00:19:00.120 11:54:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.120 11:54:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:00.120 11:54:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:00.120 11:54:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:00.120 11:54:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:00.120 11:54:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.120 11:54:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.120 11:54:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.120 11:54:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:00.120 11:54:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.120 11:54:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.120 11:54:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:00.120 11:54:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.120 11:54:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.120 11:54:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:00.120 11:54:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.120 11:54:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.120 11:54:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.120 11:54:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.120 11:54:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.120 11:54:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.120 11:54:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.120 11:54:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.120 11:54:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.120 11:54:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:00.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:19:00.120 00:19:00.120 --- 10.0.0.2 ping statistics --- 00:19:00.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.120 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:00.120 11:54:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:19:00.120 00:19:00.120 --- 10.0.0.1 ping statistics --- 00:19:00.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.120 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:00.120 11:54:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.120 11:54:50 -- nvmf/common.sh@411 -- # return 0 00:19:00.120 11:54:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:00.120 11:54:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.120 11:54:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:00.120 11:54:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:00.120 11:54:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.120 11:54:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:00.120 11:54:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:00.120 11:54:50 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:00.120 11:54:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:00.120 11:54:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:00.120 11:54:50 -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 11:54:50 -- nvmf/common.sh@470 -- # nvmfpid=2489592 00:19:00.120 11:54:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:00.120 11:54:50 -- nvmf/common.sh@471 -- # waitforlisten 2489592 00:19:00.120 11:54:50 -- common/autotest_common.sh@817 -- # '[' -z 2489592 ']' 00:19:00.120 11:54:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.120 11:54:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:00.120 11:54:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.120 11:54:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:00.120 11:54:50 -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 [2024-04-18 11:54:50.363895] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:00.120 [2024-04-18 11:54:50.363982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.120 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.120 [2024-04-18 11:54:50.495632] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.379 [2024-04-18 11:54:50.702510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.380 [2024-04-18 11:54:50.702558] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.380 [2024-04-18 11:54:50.702571] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.380 [2024-04-18 11:54:50.702585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.380 [2024-04-18 11:54:50.702594] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
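The nvmf_tgt instance whose startup banner continues below runs inside a network namespace that the common setup above has just finished building. As a rough summary of the ip/iptables commands visible in the trace (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing and port 4420 are taken from the log; the comments are interpretation, not verified against nvmf/common.sh):

# One port of the ice-driven E810 NIC found above is moved into a namespace and becomes
# the target side; the other port stays in the root namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                     # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The target is then started inside the namespace, which is why every later target-side
# command in the trace is prefixed with 'ip netns exec cvl_0_0_ns_spdk'.
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF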
00:19:00.380 [2024-04-18 11:54:50.702673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.380 [2024-04-18 11:54:50.702789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.380 [2024-04-18 11:54:50.702850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.380 [2024-04-18 11:54:50.702858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.638 11:54:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:00.638 11:54:51 -- common/autotest_common.sh@850 -- # return 0 00:19:00.638 11:54:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:00.638 11:54:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:00.638 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.638 11:54:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.638 11:54:51 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.638 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.638 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.638 [2024-04-18 11:54:51.183129] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 Malloc0 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 [2024-04-18 11:54:51.305762] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:00.898 test case1: single bdev can't be used in multiple subsystems 00:19:00.898 11:54:51 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@28 -- # nmic_status=0 00:19:00.898 11:54:51 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 [2024-04-18 11:54:51.329589] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:00.898 [2024-04-18 11:54:51.329623] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:00.898 [2024-04-18 11:54:51.329641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.898 request: 00:19:00.898 { 00:19:00.898 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:00.898 "namespace": { 00:19:00.898 "bdev_name": "Malloc0", 00:19:00.898 "no_auto_visible": false 00:19:00.898 }, 00:19:00.898 "method": "nvmf_subsystem_add_ns", 00:19:00.898 "req_id": 1 00:19:00.898 } 00:19:00.898 Got JSON-RPC error response 00:19:00.898 response: 00:19:00.898 { 00:19:00.898 "code": -32602, 00:19:00.898 "message": "Invalid parameters" 00:19:00.898 } 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@29 -- # nmic_status=1 00:19:00.898 11:54:51 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:00.898 11:54:51 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:00.898 Adding namespace failed - expected result. 00:19:00.898 11:54:51 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:00.898 test case2: host connect to nvmf target in multiple paths 00:19:00.898 11:54:51 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:00.898 11:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.898 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:19:00.898 [2024-04-18 11:54:51.341779] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:00.898 11:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.898 11:54:51 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.276 11:54:52 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:03.653 11:54:54 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.653 11:54:54 -- common/autotest_common.sh@1184 -- # local i=0 00:19:03.653 11:54:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.653 11:54:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:03.654 11:54:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:05.556 11:54:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:05.556 11:54:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:05.556 11:54:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.556 11:54:56 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:05.556 11:54:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.556 11:54:56 -- common/autotest_common.sh@1194 -- # return 0 00:19:05.556 11:54:56 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:05.556 [global] 00:19:05.556 thread=1 00:19:05.556 invalidate=1 00:19:05.556 rw=write 00:19:05.556 time_based=1 00:19:05.556 runtime=1 00:19:05.556 ioengine=libaio 00:19:05.556 direct=1 00:19:05.556 bs=4096 00:19:05.556 iodepth=1 00:19:05.556 norandommap=0 00:19:05.556 numjobs=1 00:19:05.556 00:19:05.556 verify_dump=1 00:19:05.556 verify_backlog=512 00:19:05.556 verify_state_save=0 00:19:05.556 do_verify=1 00:19:05.556 verify=crc32c-intel 00:19:05.556 [job0] 00:19:05.556 filename=/dev/nvme0n1 00:19:05.825 Could not set queue depth (nvme0n1) 00:19:06.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.111 fio-3.35 00:19:06.111 Starting 1 thread 00:19:07.049 00:19:07.049 job0: (groupid=0, jobs=1): err= 0: pid=2490829: Thu Apr 18 11:54:57 2024 00:19:07.049 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:19:07.049 slat (nsec): min=11255, max=27226, avg=23127.62, stdev=4326.77 00:19:07.049 clat (usec): min=40855, max=42868, avg=41203.85, stdev=505.76 00:19:07.049 lat (usec): min=40881, max=42884, avg=41226.98, stdev=504.91 00:19:07.049 clat percentiles (usec): 00:19:07.049 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:07.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:07.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:07.049 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:07.049 | 99.99th=[42730] 00:19:07.049 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:19:07.049 slat (usec): min=12, max=24909, avg=62.23, stdev=1100.27 00:19:07.049 clat (usec): min=198, max=544, avg=246.95, stdev=35.64 00:19:07.049 lat (usec): min=222, max=25449, avg=309.18, stdev=1113.74 00:19:07.049 clat percentiles (usec): 00:19:07.049 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:19:07.049 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 237], 00:19:07.049 | 70.00th=[ 251], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 310], 00:19:07.049 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 545], 99.95th=[ 545], 00:19:07.049 | 99.99th=[ 545] 00:19:07.049 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:07.049 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:07.049 lat (usec) : 250=66.60%, 500=29.08%, 750=0.38% 00:19:07.049 lat (msec) : 50=3.94% 00:19:07.049 cpu : usr=0.78%, sys=0.68%, ctx=535, majf=0, minf=2 00:19:07.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.049 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.049 00:19:07.049 Run status group 0 (all jobs): 00:19:07.049 READ: bw=81.7KiB/s (83.7kB/s), 81.7KiB/s-81.7KiB/s (83.7kB/s-83.7kB/s), io=84.0KiB (86.0kB), run=1028-1028msec 00:19:07.049 WRITE: bw=1992KiB/s (2040kB/s), 
1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:19:07.049 00:19:07.049 Disk stats (read/write): 00:19:07.049 nvme0n1: ios=43/512, merge=0/0, ticks=1697/120, in_queue=1817, util=98.70% 00:19:07.049 11:54:57 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:07.617 11:54:58 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:07.617 11:54:58 -- common/autotest_common.sh@1205 -- # local i=0 00:19:07.617 11:54:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:07.617 11:54:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.617 11:54:58 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:07.617 11:54:58 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.876 11:54:58 -- common/autotest_common.sh@1217 -- # return 0 00:19:07.876 11:54:58 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:07.876 11:54:58 -- target/nmic.sh@53 -- # nvmftestfini 00:19:07.876 11:54:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:07.876 11:54:58 -- nvmf/common.sh@117 -- # sync 00:19:07.876 11:54:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.876 11:54:58 -- nvmf/common.sh@120 -- # set +e 00:19:07.876 11:54:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.876 11:54:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.877 rmmod nvme_tcp 00:19:07.877 rmmod nvme_fabrics 00:19:07.877 rmmod nvme_keyring 00:19:07.877 11:54:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.877 11:54:58 -- nvmf/common.sh@124 -- # set -e 00:19:07.877 11:54:58 -- nvmf/common.sh@125 -- # return 0 00:19:07.877 11:54:58 -- nvmf/common.sh@478 -- # '[' -n 2489592 ']' 00:19:07.877 11:54:58 -- nvmf/common.sh@479 -- # killprocess 2489592 00:19:07.877 11:54:58 -- common/autotest_common.sh@936 -- # '[' -z 2489592 ']' 00:19:07.877 11:54:58 -- common/autotest_common.sh@940 -- # kill -0 2489592 00:19:07.877 11:54:58 -- common/autotest_common.sh@941 -- # uname 00:19:07.877 11:54:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.877 11:54:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2489592 00:19:07.877 11:54:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:07.877 11:54:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:07.877 11:54:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2489592' 00:19:07.877 killing process with pid 2489592 00:19:07.877 11:54:58 -- common/autotest_common.sh@955 -- # kill 2489592 00:19:07.877 11:54:58 -- common/autotest_common.sh@960 -- # wait 2489592 00:19:09.253 11:54:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:09.253 11:54:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:09.253 11:54:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:09.253 11:54:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.253 11:54:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.253 11:54:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.253 11:54:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.253 11:54:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.785 11:55:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:11.785 00:19:11.785 real 0m17.911s 00:19:11.785 user 0m44.736s 00:19:11.785 sys 0m5.840s 
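Before the nmic section is closed out below, the behaviour it verified is easier to see as the bare RPC sequence, reconstructed from the rpc_cmd calls in the trace above. NQNs, serial numbers, addresses and ports are the ones the log uses; in the trace the calls go through the rpc_cmd helper, here they are written directly against scripts/rpc.py.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Test case 1: a single bdev cannot back namespaces in two subsystems.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # claims Malloc0 (exclusive_write)
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0        # expected to fail:
# "bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target",
# surfaced as the JSON-RPC -32602 "Invalid parameters" response seen above.

# Test case 2: multiple paths to the same subsystem are allowed. A second listener is
# added on port 4421 and the host connects once per port before running fio.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421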
00:19:11.785 11:55:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.785 11:55:01 -- common/autotest_common.sh@10 -- # set +x 00:19:11.785 ************************************ 00:19:11.785 END TEST nvmf_nmic 00:19:11.785 ************************************ 00:19:11.785 11:55:01 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:11.785 11:55:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:11.785 11:55:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.785 11:55:01 -- common/autotest_common.sh@10 -- # set +x 00:19:11.785 ************************************ 00:19:11.785 START TEST nvmf_fio_target 00:19:11.785 ************************************ 00:19:11.785 11:55:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:11.785 * Looking for test storage... 00:19:11.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:11.786 11:55:02 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.786 11:55:02 -- nvmf/common.sh@7 -- # uname -s 00:19:11.786 11:55:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.786 11:55:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.786 11:55:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.786 11:55:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.786 11:55:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.786 11:55:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.786 11:55:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.786 11:55:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.786 11:55:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.786 11:55:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.786 11:55:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:11.786 11:55:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:11.786 11:55:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.786 11:55:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.786 11:55:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.786 11:55:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.786 11:55:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.786 11:55:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.786 11:55:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.786 11:55:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.786 11:55:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.786 11:55:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.786 11:55:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.786 11:55:02 -- paths/export.sh@5 -- # export PATH 00:19:11.786 11:55:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.786 11:55:02 -- nvmf/common.sh@47 -- # : 0 00:19:11.786 11:55:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.786 11:55:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.786 11:55:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.786 11:55:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.786 11:55:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.786 11:55:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.786 11:55:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.786 11:55:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.786 11:55:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.786 11:55:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.786 11:55:02 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.786 11:55:02 -- target/fio.sh@16 -- # nvmftestinit 00:19:11.786 11:55:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:11.786 11:55:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.786 11:55:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:11.786 11:55:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:11.786 11:55:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:11.786 11:55:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.786 11:55:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.786 11:55:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.786 11:55:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:11.786 11:55:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:11.786 11:55:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.786 11:55:02 -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.356 11:55:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:18.356 11:55:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:18.356 11:55:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.356 11:55:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.356 11:55:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.356 11:55:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.356 11:55:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.356 11:55:07 -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.356 11:55:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.356 11:55:07 -- nvmf/common.sh@296 -- # e810=() 00:19:18.356 11:55:07 -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.356 11:55:07 -- nvmf/common.sh@297 -- # x722=() 00:19:18.356 11:55:07 -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.356 11:55:07 -- nvmf/common.sh@298 -- # mlx=() 00:19:18.356 11:55:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.356 11:55:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.356 11:55:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.356 11:55:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.356 11:55:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.356 11:55:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.356 11:55:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:18.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:18.356 11:55:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.356 11:55:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:18.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:18.356 11:55:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:19:18.356 11:55:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.356 11:55:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.356 11:55:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.356 11:55:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:18.356 11:55:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.356 11:55:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:18.356 Found net devices under 0000:af:00.0: cvl_0_0 00:19:18.356 11:55:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.356 11:55:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.356 11:55:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.356 11:55:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:18.356 11:55:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.356 11:55:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:18.356 Found net devices under 0000:af:00.1: cvl_0_1 00:19:18.356 11:55:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.356 11:55:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:18.356 11:55:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:18.356 11:55:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:18.356 11:55:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:18.356 11:55:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.356 11:55:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.356 11:55:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.356 11:55:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.356 11:55:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.356 11:55:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.356 11:55:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.356 11:55:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.356 11:55:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.356 11:55:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.356 11:55:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.356 11:55:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.356 11:55:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.356 11:55:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.356 11:55:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.356 11:55:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.356 11:55:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.356 11:55:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.356 11:55:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.356 11:55:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:18.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:18.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:19:18.356 00:19:18.356 --- 10.0.0.2 ping statistics --- 00:19:18.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.356 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:19:18.356 11:55:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:19:18.356 00:19:18.356 --- 10.0.0.1 ping statistics --- 00:19:18.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.356 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:18.356 11:55:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.356 11:55:08 -- nvmf/common.sh@411 -- # return 0 00:19:18.356 11:55:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:18.356 11:55:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.356 11:55:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:18.356 11:55:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:18.356 11:55:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.356 11:55:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:18.356 11:55:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:18.356 11:55:08 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:18.356 11:55:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:18.356 11:55:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:18.356 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:19:18.356 11:55:08 -- nvmf/common.sh@470 -- # nvmfpid=2494814 00:19:18.356 11:55:08 -- nvmf/common.sh@471 -- # waitforlisten 2494814 00:19:18.356 11:55:08 -- common/autotest_common.sh@817 -- # '[' -z 2494814 ']' 00:19:18.357 11:55:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.357 11:55:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:18.357 11:55:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.357 11:55:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:18.357 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:19:18.357 11:55:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:18.357 [2024-04-18 11:55:08.289361] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:18.357 [2024-04-18 11:55:08.289449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.357 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.357 [2024-04-18 11:55:08.417878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.357 [2024-04-18 11:55:08.627158] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.357 [2024-04-18 11:55:08.627208] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.357 [2024-04-18 11:55:08.627220] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.357 [2024-04-18 11:55:08.627233] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.357 [2024-04-18 11:55:08.627243] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.357 [2024-04-18 11:55:08.627320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.357 [2024-04-18 11:55:08.627390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.357 [2024-04-18 11:55:08.627461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.357 [2024-04-18 11:55:08.627474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.615 11:55:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.615 11:55:09 -- common/autotest_common.sh@850 -- # return 0 00:19:18.615 11:55:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:18.615 11:55:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:18.615 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:19:18.615 11:55:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.615 11:55:09 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.874 [2024-04-18 11:55:09.244047] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.874 11:55:09 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:19.132 11:55:09 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:19.132 11:55:09 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:19.391 11:55:09 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:19.391 11:55:09 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:19.650 11:55:10 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:19.650 11:55:10 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:19.908 11:55:10 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:19.908 11:55:10 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:20.167 11:55:10 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:20.426 11:55:10 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:20.426 11:55:10 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:20.685 11:55:11 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:20.685 11:55:11 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:21.015 11:55:11 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:21.015 11:55:11 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:21.015 11:55:11 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:21.274 11:55:11 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:21.274 11:55:11 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.533 11:55:11 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:21.533 11:55:11 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:21.533 11:55:12 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.792 [2024-04-18 11:55:12.190647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.792 11:55:12 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:22.051 11:55:12 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:22.051 11:55:12 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:23.428 11:55:13 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:23.428 11:55:13 -- common/autotest_common.sh@1184 -- # local i=0 00:19:23.428 11:55:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.428 11:55:13 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:19:23.428 11:55:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:19:23.428 11:55:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:25.378 11:55:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:25.378 11:55:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:25.378 11:55:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:25.378 11:55:15 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:19:25.378 11:55:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.378 11:55:15 -- common/autotest_common.sh@1194 -- # return 0 00:19:25.378 11:55:15 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:25.378 [global] 00:19:25.378 thread=1 00:19:25.378 invalidate=1 00:19:25.378 rw=write 00:19:25.378 time_based=1 00:19:25.378 runtime=1 00:19:25.378 ioengine=libaio 00:19:25.378 direct=1 00:19:25.378 bs=4096 00:19:25.378 iodepth=1 00:19:25.378 norandommap=0 00:19:25.378 numjobs=1 00:19:25.378 00:19:25.378 verify_dump=1 00:19:25.378 verify_backlog=512 00:19:25.378 verify_state_save=0 00:19:25.378 do_verify=1 00:19:25.378 verify=crc32c-intel 00:19:25.378 [job0] 00:19:25.378 filename=/dev/nvme0n1 00:19:25.637 [job1] 00:19:25.637 filename=/dev/nvme0n2 00:19:25.637 [job2] 00:19:25.637 filename=/dev/nvme0n3 00:19:25.637 [job3] 00:19:25.637 filename=/dev/nvme0n4 00:19:25.637 Could not set queue depth (nvme0n1) 00:19:25.637 Could not set queue depth (nvme0n2) 00:19:25.637 Could not set queue depth (nvme0n3) 00:19:25.637 Could not set queue depth (nvme0n4) 00:19:25.902 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:19:25.902 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.902 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.902 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.902 fio-3.35 00:19:25.902 Starting 4 threads 00:19:27.280 00:19:27.280 job0: (groupid=0, jobs=1): err= 0: pid=2496363: Thu Apr 18 11:55:17 2024 00:19:27.280 read: IOPS=1044, BW=4180KiB/s (4280kB/s)(4184KiB/1001msec) 00:19:27.280 slat (nsec): min=8601, max=41357, avg=9790.98, stdev=1761.88 00:19:27.280 clat (usec): min=347, max=8601, avg=543.58, stdev=258.29 00:19:27.280 lat (usec): min=359, max=8611, avg=553.37, stdev=258.24 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 429], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 465], 00:19:27.280 | 30.00th=[ 482], 40.00th=[ 498], 50.00th=[ 537], 60.00th=[ 570], 00:19:27.280 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 611], 95.00th=[ 627], 00:19:27.280 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 824], 99.95th=[ 8586], 00:19:27.280 | 99.99th=[ 8586] 00:19:27.280 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:27.280 slat (nsec): min=6721, max=48818, avg=11160.89, stdev=3186.54 00:19:27.280 clat (usec): min=185, max=452, avg=259.05, stdev=33.99 00:19:27.280 lat (usec): min=193, max=465, avg=270.21, stdev=35.42 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 235], 00:19:27.280 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:19:27.280 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 322], 00:19:27.280 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 437], 99.95th=[ 453], 00:19:27.280 | 99.99th=[ 453] 00:19:27.280 bw ( KiB/s): min= 6856, max= 6856, per=38.50%, avg=6856.00, stdev= 0.00, samples=1 00:19:27.280 iops : min= 1714, max= 1714, avg=1714.00, stdev= 0.00, samples=1 00:19:27.280 lat (usec) : 250=27.96%, 500=47.99%, 750=23.93%, 1000=0.08% 00:19:27.280 lat (msec) : 10=0.04% 00:19:27.280 cpu : usr=2.10%, sys=3.50%, ctx=2583, majf=0, minf=1 00:19:27.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.280 issued rwts: total=1046,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.280 job1: (groupid=0, jobs=1): err= 0: pid=2496365: Thu Apr 18 11:55:17 2024 00:19:27.280 read: IOPS=505, BW=2021KiB/s (2070kB/s)(2092KiB/1035msec) 00:19:27.280 slat (nsec): min=8685, max=42656, avg=11091.83, stdev=4381.50 00:19:27.280 clat (usec): min=350, max=41061, avg=1407.46, stdev=5807.71 00:19:27.280 lat (usec): min=359, max=41078, avg=1418.55, stdev=5808.00 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 371], 5.00th=[ 445], 10.00th=[ 461], 20.00th=[ 474], 00:19:27.280 | 30.00th=[ 490], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 594], 00:19:27.280 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 701], 00:19:27.280 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:27.280 | 99.99th=[41157] 00:19:27.280 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:19:27.280 slat (nsec): min=11500, max=38369, avg=12906.92, 
stdev=2013.12 00:19:27.280 clat (usec): min=207, max=612, avg=269.38, stdev=37.74 00:19:27.280 lat (usec): min=219, max=624, avg=282.29, stdev=38.52 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 241], 00:19:27.280 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:19:27.280 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:19:27.280 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 537], 99.95th=[ 611], 00:19:27.280 | 99.99th=[ 611] 00:19:27.280 bw ( KiB/s): min= 2424, max= 5768, per=23.00%, avg=4096.00, stdev=2364.57, samples=2 00:19:27.280 iops : min= 606, max= 1442, avg=1024.00, stdev=591.14, samples=2 00:19:27.280 lat (usec) : 250=24.76%, 500=52.36%, 750=21.65%, 1000=0.52% 00:19:27.280 lat (msec) : 50=0.71% 00:19:27.280 cpu : usr=1.16%, sys=2.13%, ctx=1547, majf=0, minf=2 00:19:27.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.280 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.280 job2: (groupid=0, jobs=1): err= 0: pid=2496374: Thu Apr 18 11:55:17 2024 00:19:27.280 read: IOPS=1247, BW=4991KiB/s (5111kB/s)(4996KiB/1001msec) 00:19:27.280 slat (nsec): min=8982, max=33661, avg=9614.44, stdev=1221.26 00:19:27.280 clat (usec): min=343, max=661, avg=461.27, stdev=29.12 00:19:27.280 lat (usec): min=352, max=671, avg=470.89, stdev=29.25 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 367], 5.00th=[ 396], 10.00th=[ 429], 20.00th=[ 449], 00:19:27.280 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 469], 00:19:27.280 | 70.00th=[ 478], 80.00th=[ 482], 90.00th=[ 490], 95.00th=[ 494], 00:19:27.280 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 644], 99.95th=[ 660], 00:19:27.280 | 99.99th=[ 660] 00:19:27.280 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:27.280 slat (nsec): min=11933, max=45447, avg=12965.63, stdev=1633.35 00:19:27.280 clat (usec): min=192, max=2103, avg=250.85, stdev=55.76 00:19:27.280 lat (usec): min=208, max=2116, avg=263.82, stdev=55.88 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:19:27.280 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:19:27.280 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 310], 00:19:27.280 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 515], 99.95th=[ 2114], 00:19:27.280 | 99.99th=[ 2114] 00:19:27.280 bw ( KiB/s): min= 7552, max= 7552, per=42.41%, avg=7552.00, stdev= 0.00, samples=1 00:19:27.280 iops : min= 1888, max= 1888, avg=1888.00, stdev= 0.00, samples=1 00:19:27.280 lat (usec) : 250=33.07%, 500=65.60%, 750=1.29% 00:19:27.280 lat (msec) : 4=0.04% 00:19:27.280 cpu : usr=2.30%, sys=3.00%, ctx=2786, majf=0, minf=1 00:19:27.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.280 issued rwts: total=1249,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.280 job3: (groupid=0, jobs=1): err= 0: pid=2496378: Thu Apr 18 11:55:17 2024 
00:19:27.280 read: IOPS=19, BW=79.3KiB/s (81.2kB/s)(80.0KiB/1009msec) 00:19:27.280 slat (nsec): min=11230, max=25270, avg=14086.05, stdev=3396.68 00:19:27.280 clat (usec): min=40883, max=41573, avg=41018.61, stdev=144.07 00:19:27.280 lat (usec): min=40900, max=41584, avg=41032.70, stdev=143.11 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:27.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:27.280 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:27.280 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:27.280 | 99.99th=[41681] 00:19:27.280 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:19:27.280 slat (nsec): min=12779, max=50466, avg=15233.73, stdev=4290.27 00:19:27.280 clat (usec): min=228, max=965, avg=349.49, stdev=68.58 00:19:27.280 lat (usec): min=241, max=979, avg=364.73, stdev=68.96 00:19:27.280 clat percentiles (usec): 00:19:27.280 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 310], 00:19:27.280 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 355], 00:19:27.280 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 441], 00:19:27.280 | 99.00th=[ 545], 99.50th=[ 873], 99.90th=[ 963], 99.95th=[ 963], 00:19:27.280 | 99.99th=[ 963] 00:19:27.281 bw ( KiB/s): min= 4096, max= 4096, per=23.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:27.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:27.281 lat (usec) : 250=1.13%, 500=93.80%, 750=0.56%, 1000=0.75% 00:19:27.281 lat (msec) : 50=3.76% 00:19:27.281 cpu : usr=0.89%, sys=0.69%, ctx=533, majf=0, minf=1 00:19:27.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.281 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.281 00:19:27.281 Run status group 0 (all jobs): 00:19:27.281 READ: bw=10.7MiB/s (11.2MB/s), 79.3KiB/s-4991KiB/s (81.2kB/s-5111kB/s), io=11.1MiB (11.6MB), run=1001-1035msec 00:19:27.281 WRITE: bw=17.4MiB/s (18.2MB/s), 2030KiB/s-6138KiB/s (2078kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1035msec 00:19:27.281 00:19:27.281 Disk stats (read/write): 00:19:27.281 nvme0n1: ios=1018/1024, merge=0/0, ticks=549/252, in_queue=801, util=84.15% 00:19:27.281 nvme0n2: ios=537/1024, merge=0/0, ticks=529/273, in_queue=802, util=85.32% 00:19:27.281 nvme0n3: ios=1046/1240, merge=0/0, ticks=1372/313, in_queue=1685, util=99.89% 00:19:27.281 nvme0n4: ios=40/512, merge=0/0, ticks=1562/175, in_queue=1737, util=99.78% 00:19:27.281 11:55:17 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:27.281 [global] 00:19:27.281 thread=1 00:19:27.281 invalidate=1 00:19:27.281 rw=randwrite 00:19:27.281 time_based=1 00:19:27.281 runtime=1 00:19:27.281 ioengine=libaio 00:19:27.281 direct=1 00:19:27.281 bs=4096 00:19:27.281 iodepth=1 00:19:27.281 norandommap=0 00:19:27.281 numjobs=1 00:19:27.281 00:19:27.281 verify_dump=1 00:19:27.281 verify_backlog=512 00:19:27.281 verify_state_save=0 00:19:27.281 do_verify=1 00:19:27.281 verify=crc32c-intel 00:19:27.281 [job0] 00:19:27.281 filename=/dev/nvme0n1 00:19:27.281 [job1] 00:19:27.281 
filename=/dev/nvme0n2 00:19:27.281 [job2] 00:19:27.281 filename=/dev/nvme0n3 00:19:27.281 [job3] 00:19:27.281 filename=/dev/nvme0n4 00:19:27.281 Could not set queue depth (nvme0n1) 00:19:27.281 Could not set queue depth (nvme0n2) 00:19:27.281 Could not set queue depth (nvme0n3) 00:19:27.281 Could not set queue depth (nvme0n4) 00:19:27.539 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:27.539 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:27.539 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:27.539 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:27.539 fio-3.35 00:19:27.539 Starting 4 threads 00:19:28.917 00:19:28.917 job0: (groupid=0, jobs=1): err= 0: pid=2496793: Thu Apr 18 11:55:19 2024 00:19:28.917 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:19:28.917 slat (nsec): min=11624, max=26196, avg=22450.14, stdev=4799.44 00:19:28.917 clat (usec): min=40857, max=41973, avg=41150.25, stdev=365.65 00:19:28.917 lat (usec): min=40878, max=41998, avg=41172.70, stdev=364.07 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:28.917 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:28.917 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:28.917 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:28.917 | 99.99th=[42206] 00:19:28.917 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:28.917 slat (nsec): min=7364, max=63353, avg=11694.81, stdev=3472.55 00:19:28.917 clat (usec): min=220, max=545, avg=273.04, stdev=36.98 00:19:28.917 lat (usec): min=233, max=609, avg=284.73, stdev=38.33 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:19:28.917 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 273], 00:19:28.917 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:19:28.917 | 99.00th=[ 433], 99.50th=[ 482], 99.90th=[ 545], 99.95th=[ 545], 00:19:28.917 | 99.99th=[ 545] 00:19:28.917 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:19:28.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:28.917 lat (usec) : 250=26.08%, 500=69.61%, 750=0.38% 00:19:28.917 lat (msec) : 50=3.94% 00:19:28.917 cpu : usr=0.20%, sys=0.69%, ctx=534, majf=0, minf=2 00:19:28.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.917 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.917 job1: (groupid=0, jobs=1): err= 0: pid=2496802: Thu Apr 18 11:55:19 2024 00:19:28.917 read: IOPS=1111, BW=4448KiB/s (4554kB/s)(4452KiB/1001msec) 00:19:28.917 slat (nsec): min=8690, max=39632, avg=9555.81, stdev=1503.71 00:19:28.917 clat (usec): min=348, max=1558, avg=484.90, stdev=44.84 00:19:28.917 lat (usec): min=358, max=1567, avg=494.46, stdev=44.82 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[ 379], 5.00th=[ 420], 10.00th=[ 449], 20.00th=[ 
465], 00:19:28.917 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 494], 00:19:28.917 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 529], 00:19:28.917 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 619], 99.95th=[ 1565], 00:19:28.917 | 99.99th=[ 1565] 00:19:28.917 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:28.917 slat (nsec): min=12014, max=37475, avg=13158.47, stdev=1635.76 00:19:28.917 clat (usec): min=216, max=533, avg=274.61, stdev=38.76 00:19:28.917 lat (usec): min=229, max=546, avg=287.76, stdev=39.02 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:19:28.917 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:19:28.917 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 338], 00:19:28.917 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 519], 99.95th=[ 537], 00:19:28.917 | 99.99th=[ 537] 00:19:28.917 bw ( KiB/s): min= 6496, max= 6496, per=41.12%, avg=6496.00, stdev= 0.00, samples=1 00:19:28.917 iops : min= 1624, max= 1624, avg=1624.00, stdev= 0.00, samples=1 00:19:28.917 lat (usec) : 250=17.52%, 500=70.29%, 750=12.16% 00:19:28.917 lat (msec) : 2=0.04% 00:19:28.917 cpu : usr=2.70%, sys=4.50%, ctx=2653, majf=0, minf=1 00:19:28.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.917 issued rwts: total=1113,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.917 job2: (groupid=0, jobs=1): err= 0: pid=2496818: Thu Apr 18 11:55:19 2024 00:19:28.917 read: IOPS=988, BW=3954KiB/s (4049kB/s)(4100KiB/1037msec) 00:19:28.917 slat (nsec): min=9022, max=36133, avg=9982.43, stdev=1601.39 00:19:28.917 clat (usec): min=369, max=41349, avg=541.77, stdev=1276.74 00:19:28.917 lat (usec): min=378, max=41360, avg=551.75, stdev=1276.76 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[ 408], 5.00th=[ 457], 10.00th=[ 465], 20.00th=[ 474], 00:19:28.917 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 498], 00:19:28.917 | 70.00th=[ 502], 80.00th=[ 510], 90.00th=[ 553], 95.00th=[ 627], 00:19:28.917 | 99.00th=[ 668], 99.50th=[ 693], 99.90th=[ 791], 99.95th=[41157], 00:19:28.917 | 99.99th=[41157] 00:19:28.917 write: IOPS=1481, BW=5925KiB/s (6067kB/s)(6144KiB/1037msec); 0 zone resets 00:19:28.917 slat (nsec): min=11676, max=49183, avg=13324.17, stdev=1992.50 00:19:28.917 clat (usec): min=211, max=605, avg=287.54, stdev=30.72 00:19:28.917 lat (usec): min=239, max=619, avg=300.87, stdev=30.99 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 262], 00:19:28.917 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:19:28.917 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 338], 00:19:28.917 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 441], 99.95th=[ 603], 00:19:28.917 | 99.99th=[ 603] 00:19:28.917 bw ( KiB/s): min= 5928, max= 6360, per=38.89%, avg=6144.00, stdev=305.47, samples=2 00:19:28.917 iops : min= 1482, max= 1590, avg=1536.00, stdev=76.37, samples=2 00:19:28.917 lat (usec) : 250=6.56%, 500=79.27%, 750=14.10%, 1000=0.04% 00:19:28.917 lat (msec) : 50=0.04% 00:19:28.917 cpu : usr=2.41%, sys=3.96%, ctx=2563, majf=0, minf=1 00:19:28.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:19:28.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.917 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.917 job3: (groupid=0, jobs=1): err= 0: pid=2496826: Thu Apr 18 11:55:19 2024 00:19:28.917 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:19:28.917 slat (nsec): min=12590, max=28077, avg=26326.71, stdev=3203.55 00:19:28.917 clat (usec): min=40839, max=42112, avg=41119.11, stdev=376.53 00:19:28.917 lat (usec): min=40868, max=42125, avg=41145.44, stdev=374.70 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:28.917 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:28.917 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:28.917 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:28.917 | 99.99th=[42206] 00:19:28.917 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:19:28.917 slat (nsec): min=12590, max=47367, avg=13922.03, stdev=2251.66 00:19:28.917 clat (usec): min=242, max=401, avg=277.63, stdev=20.82 00:19:28.917 lat (usec): min=256, max=448, avg=291.55, stdev=21.32 00:19:28.917 clat percentiles (usec): 00:19:28.917 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 262], 00:19:28.917 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:19:28.917 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 322], 00:19:28.917 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 400], 99.95th=[ 400], 00:19:28.917 | 99.99th=[ 400] 00:19:28.917 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:19:28.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:28.917 lat (usec) : 250=2.44%, 500=93.62% 00:19:28.917 lat (msec) : 50=3.94% 00:19:28.917 cpu : usr=0.79%, sys=0.69%, ctx=535, majf=0, minf=1 00:19:28.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.918 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.918 00:19:28.918 Run status group 0 (all jobs): 00:19:28.918 READ: bw=8409KiB/s (8611kB/s), 82.6KiB/s-4448KiB/s (84.6kB/s-4554kB/s), io=8720KiB (8929kB), run=1001-1037msec 00:19:28.918 WRITE: bw=15.4MiB/s (16.2MB/s), 2014KiB/s-6138KiB/s (2062kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1037msec 00:19:28.918 00:19:28.918 Disk stats (read/write): 00:19:28.918 nvme0n1: ios=66/512, merge=0/0, ticks=709/138, in_queue=847, util=85.27% 00:19:28.918 nvme0n2: ios=1048/1056, merge=0/0, ticks=1342/287, in_queue=1629, util=91.10% 00:19:28.918 nvme0n3: ios=1026/1024, merge=0/0, ticks=1398/285, in_queue=1683, util=95.30% 00:19:28.918 nvme0n4: ios=41/512, merge=0/0, ticks=1564/133, in_queue=1697, util=98.59% 00:19:28.918 11:55:19 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:28.918 [global] 00:19:28.918 thread=1 00:19:28.918 invalidate=1 00:19:28.918 rw=write 00:19:28.918 time_based=1 00:19:28.918 runtime=1 00:19:28.918 
ioengine=libaio 00:19:28.918 direct=1 00:19:28.918 bs=4096 00:19:28.918 iodepth=128 00:19:28.918 norandommap=0 00:19:28.918 numjobs=1 00:19:28.918 00:19:28.918 verify_dump=1 00:19:28.918 verify_backlog=512 00:19:28.918 verify_state_save=0 00:19:28.918 do_verify=1 00:19:28.918 verify=crc32c-intel 00:19:28.918 [job0] 00:19:28.918 filename=/dev/nvme0n1 00:19:28.918 [job1] 00:19:28.918 filename=/dev/nvme0n2 00:19:28.918 [job2] 00:19:28.918 filename=/dev/nvme0n3 00:19:28.918 [job3] 00:19:28.918 filename=/dev/nvme0n4 00:19:28.918 Could not set queue depth (nvme0n1) 00:19:28.918 Could not set queue depth (nvme0n2) 00:19:28.918 Could not set queue depth (nvme0n3) 00:19:28.918 Could not set queue depth (nvme0n4) 00:19:29.175 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:29.175 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:29.175 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:29.175 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:29.175 fio-3.35 00:19:29.175 Starting 4 threads 00:19:30.554 00:19:30.554 job0: (groupid=0, jobs=1): err= 0: pid=2497228: Thu Apr 18 11:55:20 2024 00:19:30.554 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:19:30.554 slat (usec): min=2, max=18247, avg=80.14, stdev=689.67 00:19:30.554 clat (usec): min=5274, max=43987, avg=12165.78, stdev=6934.30 00:19:30.554 lat (usec): min=5310, max=44010, avg=12245.92, stdev=6976.24 00:19:30.554 clat percentiles (usec): 00:19:30.554 | 1.00th=[ 6325], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8160], 00:19:30.554 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10159], 00:19:30.554 | 70.00th=[11076], 80.00th=[13960], 90.00th=[22676], 95.00th=[31065], 00:19:30.554 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:19:30.554 | 99.99th=[43779] 00:19:30.554 write: IOPS=6035, BW=23.6MiB/s (24.7MB/s)(23.7MiB/1006msec); 0 zone resets 00:19:30.554 slat (usec): min=2, max=10927, avg=70.85, stdev=419.07 00:19:30.554 clat (usec): min=910, max=29924, avg=9732.19, stdev=3902.56 00:19:30.554 lat (usec): min=928, max=29954, avg=9803.04, stdev=3921.02 00:19:30.554 clat percentiles (usec): 00:19:30.554 | 1.00th=[ 2835], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6456], 00:19:30.554 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 9110], 60.00th=[10290], 00:19:30.554 | 70.00th=[11207], 80.00th=[12649], 90.00th=[14353], 95.00th=[16909], 00:19:30.554 | 99.00th=[21103], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:19:30.554 | 99.99th=[30016] 00:19:30.554 bw ( KiB/s): min=20456, max=27096, per=43.53%, avg=23776.00, stdev=4695.19, samples=2 00:19:30.554 iops : min= 5114, max= 6774, avg=5944.00, stdev=1173.80, samples=2 00:19:30.554 lat (usec) : 1000=0.03% 00:19:30.554 lat (msec) : 2=0.19%, 4=0.59%, 10=57.62%, 20=34.78%, 50=6.79% 00:19:30.554 cpu : usr=6.37%, sys=8.86%, ctx=543, majf=0, minf=1 00:19:30.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:30.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.554 issued rwts: total=5632,6072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.554 job1: (groupid=0, jobs=1): err= 0: pid=2497242: 
Thu Apr 18 11:55:20 2024 00:19:30.554 read: IOPS=2005, BW=8024KiB/s (8216kB/s)(8192KiB/1021msec) 00:19:30.554 slat (nsec): min=1772, max=25149k, avg=238471.07, stdev=1574500.48 00:19:30.554 clat (msec): min=2, max=116, avg=26.45, stdev=20.19 00:19:30.554 lat (msec): min=2, max=116, avg=26.69, stdev=20.37 00:19:30.554 clat percentiles (msec): 00:19:30.554 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 7], 20.00th=[ 13], 00:19:30.554 | 30.00th=[ 16], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 26], 00:19:30.554 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 47], 95.00th=[ 78], 00:19:30.554 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 117], 99.95th=[ 117], 00:19:30.554 | 99.99th=[ 117] 00:19:30.554 write: IOPS=2128, BW=8513KiB/s (8718kB/s)(8692KiB/1021msec); 0 zone resets 00:19:30.554 slat (usec): min=2, max=10196, avg=197.10, stdev=1010.35 00:19:30.554 clat (usec): min=393, max=116578, avg=34725.16, stdev=34510.14 00:19:30.554 lat (usec): min=424, max=116593, avg=34922.26, stdev=34709.20 00:19:30.554 clat percentiles (usec): 00:19:30.554 | 1.00th=[ 1020], 5.00th=[ 1876], 10.00th=[ 2245], 20.00th=[ 3032], 00:19:30.554 | 30.00th=[ 7832], 40.00th=[ 13173], 50.00th=[ 17957], 60.00th=[ 27657], 00:19:30.554 | 70.00th=[ 56361], 80.00th=[ 74974], 90.00th=[ 86508], 95.00th=[ 98042], 00:19:30.554 | 99.00th=[113771], 99.50th=[114820], 99.90th=[114820], 99.95th=[116917], 00:19:30.554 | 99.99th=[116917] 00:19:30.554 bw ( KiB/s): min= 5352, max=11080, per=15.04%, avg=8216.00, stdev=4050.31, samples=2 00:19:30.554 iops : min= 1338, max= 2770, avg=2054.00, stdev=1012.58, samples=2 00:19:30.554 lat (usec) : 500=0.07%, 750=0.02%, 1000=0.19% 00:19:30.554 lat (msec) : 2=3.15%, 4=12.60%, 10=8.88%, 20=20.35%, 50=33.43% 00:19:30.554 lat (msec) : 100=18.38%, 250=2.91% 00:19:30.554 cpu : usr=2.75%, sys=4.02%, ctx=266, majf=0, minf=1 00:19:30.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:30.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.554 issued rwts: total=2048,2173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.554 job2: (groupid=0, jobs=1): err= 0: pid=2497263: Thu Apr 18 11:55:20 2024 00:19:30.554 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:19:30.554 slat (usec): min=2, max=20456, avg=138.06, stdev=985.47 00:19:30.554 clat (usec): min=6523, max=54837, avg=18186.15, stdev=7857.62 00:19:30.554 lat (usec): min=8529, max=54850, avg=18324.20, stdev=7927.38 00:19:30.554 clat percentiles (usec): 00:19:30.554 | 1.00th=[ 9372], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11076], 00:19:30.554 | 30.00th=[13042], 40.00th=[14484], 50.00th=[15401], 60.00th=[16712], 00:19:30.554 | 70.00th=[21890], 80.00th=[24511], 90.00th=[28443], 95.00th=[32637], 00:19:30.554 | 99.00th=[43254], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:19:30.554 | 99.99th=[54789] 00:19:30.554 write: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1010msec); 0 zone resets 00:19:30.554 slat (usec): min=6, max=37182, avg=205.65, stdev=1342.78 00:19:30.554 clat (msec): min=3, max=133, avg=25.73, stdev=31.01 00:19:30.554 lat (msec): min=5, max=133, avg=25.93, stdev=31.22 00:19:30.554 clat percentiles (msec): 00:19:30.554 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:19:30.554 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:19:30.554 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 86], 95.00th=[ 107], 
00:19:30.554 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 133], 00:19:30.554 | 99.99th=[ 133] 00:19:30.554 bw ( KiB/s): min= 6896, max=15768, per=20.75%, avg=11332.00, stdev=6273.45, samples=2 00:19:30.554 iops : min= 1724, max= 3942, avg=2833.00, stdev=1568.36, samples=2 00:19:30.554 lat (msec) : 4=0.02%, 10=18.87%, 20=51.93%, 50=21.03%, 100=4.20% 00:19:30.554 lat (msec) : 250=3.95% 00:19:30.554 cpu : usr=4.76%, sys=5.25%, ctx=185, majf=0, minf=1 00:19:30.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:30.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.554 issued rwts: total=2560,2961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.554 job3: (groupid=0, jobs=1): err= 0: pid=2497271: Thu Apr 18 11:55:20 2024 00:19:30.554 read: IOPS=2504, BW=9.78MiB/s (10.3MB/s)(10.0MiB/1022msec) 00:19:30.554 slat (usec): min=2, max=23700, avg=175.50, stdev=1230.48 00:19:30.554 clat (usec): min=3266, max=92013, avg=22089.52, stdev=12200.66 00:19:30.554 lat (usec): min=3276, max=92027, avg=22265.02, stdev=12316.13 00:19:30.554 clat percentiles (usec): 00:19:30.554 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11338], 20.00th=[13829], 00:19:30.554 | 30.00th=[14877], 40.00th=[16712], 50.00th=[18482], 60.00th=[20841], 00:19:30.554 | 70.00th=[24773], 80.00th=[27919], 90.00th=[32637], 95.00th=[49021], 00:19:30.554 | 99.00th=[76022], 99.50th=[86508], 99.90th=[91751], 99.95th=[91751], 00:19:30.554 | 99.99th=[91751] 00:19:30.554 write: IOPS=2690, BW=10.5MiB/s (11.0MB/s)(10.7MiB/1022msec); 0 zone resets 00:19:30.554 slat (usec): min=2, max=11851, avg=165.39, stdev=938.81 00:19:30.554 clat (usec): min=1576, max=92736, avg=26591.26, stdev=25404.17 00:19:30.554 lat (usec): min=1599, max=92758, avg=26756.65, stdev=25559.72 00:19:30.554 clat percentiles (usec): 00:19:30.554 | 1.00th=[ 3130], 5.00th=[ 4015], 10.00th=[ 5932], 20.00th=[ 9896], 00:19:30.554 | 30.00th=[11994], 40.00th=[13173], 50.00th=[14484], 60.00th=[17433], 00:19:30.554 | 70.00th=[23462], 80.00th=[47449], 90.00th=[74974], 95.00th=[86508], 00:19:30.554 | 99.00th=[91751], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:19:30.554 | 99.99th=[92799] 00:19:30.554 bw ( KiB/s): min= 4592, max=16384, per=19.20%, avg=10488.00, stdev=8338.20, samples=2 00:19:30.554 iops : min= 1148, max= 4096, avg=2622.00, stdev=2084.55, samples=2 00:19:30.554 lat (msec) : 2=0.04%, 4=1.83%, 10=9.00%, 20=48.21%, 50=28.57% 00:19:30.554 lat (msec) : 100=12.35% 00:19:30.554 cpu : usr=3.72%, sys=4.80%, ctx=225, majf=0, minf=1 00:19:30.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:30.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.554 issued rwts: total=2560,2750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.554 00:19:30.554 Run status group 0 (all jobs): 00:19:30.554 READ: bw=48.9MiB/s (51.3MB/s), 8024KiB/s-21.9MiB/s (8216kB/s-22.9MB/s), io=50.0MiB (52.4MB), run=1006-1022msec 00:19:30.554 WRITE: bw=53.3MiB/s (55.9MB/s), 8513KiB/s-23.6MiB/s (8718kB/s-24.7MB/s), io=54.5MiB (57.2MB), run=1006-1022msec 00:19:30.554 00:19:30.554 Disk stats (read/write): 00:19:30.554 nvme0n1: ios=4627/4674, merge=0/0, ticks=57147/41447, 
in_queue=98594, util=98.40% 00:19:30.554 nvme0n2: ios=1585/1959, merge=0/0, ticks=33435/63885, in_queue=97320, util=91.15% 00:19:30.554 nvme0n3: ios=1957/2048, merge=0/0, ticks=33541/58270, in_queue=91811, util=96.78% 00:19:30.555 nvme0n4: ios=2105/2511, merge=0/0, ticks=37914/58677, in_queue=96591, util=91.73% 00:19:30.555 11:55:20 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:30.555 [global] 00:19:30.555 thread=1 00:19:30.555 invalidate=1 00:19:30.555 rw=randwrite 00:19:30.555 time_based=1 00:19:30.555 runtime=1 00:19:30.555 ioengine=libaio 00:19:30.555 direct=1 00:19:30.555 bs=4096 00:19:30.555 iodepth=128 00:19:30.555 norandommap=0 00:19:30.555 numjobs=1 00:19:30.555 00:19:30.555 verify_dump=1 00:19:30.555 verify_backlog=512 00:19:30.555 verify_state_save=0 00:19:30.555 do_verify=1 00:19:30.555 verify=crc32c-intel 00:19:30.555 [job0] 00:19:30.555 filename=/dev/nvme0n1 00:19:30.555 [job1] 00:19:30.555 filename=/dev/nvme0n2 00:19:30.555 [job2] 00:19:30.555 filename=/dev/nvme0n3 00:19:30.555 [job3] 00:19:30.555 filename=/dev/nvme0n4 00:19:30.555 Could not set queue depth (nvme0n1) 00:19:30.555 Could not set queue depth (nvme0n2) 00:19:30.555 Could not set queue depth (nvme0n3) 00:19:30.555 Could not set queue depth (nvme0n4) 00:19:30.814 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:30.814 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:30.814 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:30.814 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:30.814 fio-3.35 00:19:30.814 Starting 4 threads 00:19:32.194 00:19:32.194 job0: (groupid=0, jobs=1): err= 0: pid=2497649: Thu Apr 18 11:55:22 2024 00:19:32.194 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:19:32.194 slat (nsec): min=1853, max=12520k, avg=143076.41, stdev=908923.69 00:19:32.194 clat (usec): min=5907, max=45299, avg=18059.43, stdev=7120.56 00:19:32.194 lat (usec): min=5924, max=45324, avg=18202.51, stdev=7200.87 00:19:32.194 clat percentiles (usec): 00:19:32.194 | 1.00th=[ 6915], 5.00th=[ 8356], 10.00th=[10159], 20.00th=[11207], 00:19:32.194 | 30.00th=[14353], 40.00th=[15270], 50.00th=[16909], 60.00th=[18220], 00:19:32.194 | 70.00th=[21103], 80.00th=[22414], 90.00th=[28443], 95.00th=[32900], 00:19:32.194 | 99.00th=[37487], 99.50th=[41157], 99.90th=[42206], 99.95th=[45351], 00:19:32.194 | 99.99th=[45351] 00:19:32.194 write: IOPS=3716, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1008msec); 0 zone resets 00:19:32.194 slat (usec): min=2, max=8329, avg=116.89, stdev=652.13 00:19:32.194 clat (usec): min=1528, max=56820, avg=16740.12, stdev=9810.22 00:19:32.194 lat (usec): min=1541, max=56831, avg=16857.01, stdev=9869.99 00:19:32.194 clat percentiles (usec): 00:19:32.194 | 1.00th=[ 3490], 5.00th=[ 6063], 10.00th=[ 9241], 20.00th=[10945], 00:19:32.194 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13435], 60.00th=[14353], 00:19:32.194 | 70.00th=[15926], 80.00th=[20579], 90.00th=[33817], 95.00th=[39060], 00:19:32.194 | 99.00th=[50594], 99.50th=[51643], 99.90th=[56886], 99.95th=[56886], 00:19:32.194 | 99.99th=[56886] 00:19:32.194 bw ( KiB/s): min=12592, max=16351, per=23.27%, avg=14471.50, stdev=2658.01, samples=2 00:19:32.194 iops : min= 3148, max= 4087, avg=3617.50, 
stdev=663.97, samples=2 00:19:32.194 lat (msec) : 2=0.11%, 4=0.55%, 10=10.20%, 20=61.88%, 50=26.64% 00:19:32.194 lat (msec) : 100=0.61% 00:19:32.194 cpu : usr=3.87%, sys=5.56%, ctx=422, majf=0, minf=1 00:19:32.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:32.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:32.194 issued rwts: total=3584,3746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:32.194 job1: (groupid=0, jobs=1): err= 0: pid=2497664: Thu Apr 18 11:55:22 2024 00:19:32.194 read: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1008msec) 00:19:32.194 slat (nsec): min=1780, max=21644k, avg=123989.81, stdev=944790.27 00:19:32.194 clat (usec): min=1266, max=46371, avg=17129.48, stdev=6508.65 00:19:32.194 lat (usec): min=3905, max=49599, avg=17253.47, stdev=6592.79 00:19:32.194 clat percentiles (usec): 00:19:32.194 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 8717], 20.00th=[10814], 00:19:32.194 | 30.00th=[13698], 40.00th=[14877], 50.00th=[17171], 60.00th=[19268], 00:19:32.194 | 70.00th=[20841], 80.00th=[22152], 90.00th=[25297], 95.00th=[28967], 00:19:32.194 | 99.00th=[33162], 99.50th=[35390], 99.90th=[40109], 99.95th=[44303], 00:19:32.194 | 99.99th=[46400] 00:19:32.194 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:19:32.194 slat (usec): min=2, max=35775, avg=139.87, stdev=1019.12 00:19:32.194 clat (usec): min=1375, max=83970, avg=19000.06, stdev=15083.69 00:19:32.194 lat (usec): min=1381, max=83984, avg=19139.92, stdev=15178.18 00:19:32.194 clat percentiles (usec): 00:19:32.194 | 1.00th=[ 2311], 5.00th=[ 6259], 10.00th=[ 7373], 20.00th=[ 9765], 00:19:32.194 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13435], 60.00th=[15139], 00:19:32.194 | 70.00th=[18744], 80.00th=[25297], 90.00th=[38536], 95.00th=[48497], 00:19:32.194 | 99.00th=[83362], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:19:32.194 | 99.99th=[84411] 00:19:32.195 bw ( KiB/s): min=12288, max=16384, per=23.06%, avg=14336.00, stdev=2896.31, samples=2 00:19:32.195 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:32.195 lat (msec) : 2=0.31%, 4=0.78%, 10=17.53%, 20=49.84%, 50=29.18% 00:19:32.195 lat (msec) : 100=2.36% 00:19:32.195 cpu : usr=2.88%, sys=5.46%, ctx=292, majf=0, minf=1 00:19:32.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:32.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:32.195 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:32.195 job2: (groupid=0, jobs=1): err= 0: pid=2497684: Thu Apr 18 11:55:22 2024 00:19:32.195 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:19:32.195 slat (nsec): min=1756, max=17624k, avg=100600.97, stdev=786803.14 00:19:32.195 clat (usec): min=3078, max=46897, avg=14513.47, stdev=5212.12 00:19:32.195 lat (usec): min=3088, max=46907, avg=14614.07, stdev=5267.26 00:19:32.195 clat percentiles (usec): 00:19:32.195 | 1.00th=[ 6718], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10945], 00:19:32.195 | 30.00th=[12125], 40.00th=[12649], 50.00th=[14353], 60.00th=[14877], 00:19:32.195 | 70.00th=[15270], 80.00th=[16450], 90.00th=[18220], 95.00th=[28443], 00:19:32.195 | 
99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[36439], 00:19:32.195 | 99.99th=[46924] 00:19:32.195 write: IOPS=4649, BW=18.2MiB/s (19.0MB/s)(18.3MiB/1006msec); 0 zone resets 00:19:32.195 slat (usec): min=2, max=11203, avg=89.07, stdev=596.68 00:19:32.195 clat (usec): min=445, max=93381, avg=12990.86, stdev=8864.19 00:19:32.195 lat (usec): min=1121, max=93387, avg=13079.93, stdev=8876.50 00:19:32.195 clat percentiles (usec): 00:19:32.195 | 1.00th=[ 3687], 5.00th=[ 5866], 10.00th=[ 6849], 20.00th=[ 8225], 00:19:32.195 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11994], 60.00th=[13304], 00:19:32.195 | 70.00th=[14222], 80.00th=[15533], 90.00th=[16909], 95.00th=[18482], 00:19:32.195 | 99.00th=[63177], 99.50th=[79168], 99.90th=[93848], 99.95th=[93848], 00:19:32.195 | 99.99th=[93848] 00:19:32.195 bw ( KiB/s): min=16752, max=20112, per=29.64%, avg=18432.00, stdev=2375.88, samples=2 00:19:32.195 iops : min= 4188, max= 5028, avg=4608.00, stdev=593.97, samples=2 00:19:32.195 lat (usec) : 500=0.01% 00:19:32.195 lat (msec) : 2=0.06%, 4=0.85%, 10=21.30%, 20=72.03%, 50=4.91% 00:19:32.195 lat (msec) : 100=0.83% 00:19:32.195 cpu : usr=3.88%, sys=6.87%, ctx=414, majf=0, minf=1 00:19:32.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:32.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:32.195 issued rwts: total=4608,4677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:32.195 job3: (groupid=0, jobs=1): err= 0: pid=2497695: Thu Apr 18 11:55:22 2024 00:19:32.195 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:19:32.195 slat (usec): min=2, max=26899, avg=144.07, stdev=1182.42 00:19:32.195 clat (usec): min=2149, max=61596, avg=18492.11, stdev=8904.04 00:19:32.195 lat (usec): min=5927, max=61606, avg=18636.18, stdev=8991.09 00:19:32.195 clat percentiles (usec): 00:19:32.195 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[11469], 20.00th=[12387], 00:19:32.195 | 30.00th=[12649], 40.00th=[13173], 50.00th=[14615], 60.00th=[16712], 00:19:32.195 | 70.00th=[20317], 80.00th=[25297], 90.00th=[34866], 95.00th=[38536], 00:19:32.195 | 99.00th=[46924], 99.50th=[46924], 99.90th=[52167], 99.95th=[61604], 00:19:32.195 | 99.99th=[61604] 00:19:32.195 write: IOPS=3636, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1007msec); 0 zone resets 00:19:32.195 slat (usec): min=2, max=16507, avg=125.55, stdev=814.62 00:19:32.195 clat (usec): min=1578, max=64517, avg=16567.22, stdev=9953.23 00:19:32.195 lat (usec): min=3248, max=64537, avg=16692.77, stdev=10022.47 00:19:32.195 clat percentiles (usec): 00:19:32.195 | 1.00th=[ 6456], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10028], 00:19:32.195 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12518], 60.00th=[14222], 00:19:32.195 | 70.00th=[17433], 80.00th=[21890], 90.00th=[28705], 95.00th=[39584], 00:19:32.195 | 99.00th=[56361], 99.50th=[60031], 99.90th=[64750], 99.95th=[64750], 00:19:32.195 | 99.99th=[64750] 00:19:32.195 bw ( KiB/s): min=11128, max=17592, per=23.09%, avg=14360.00, stdev=4570.74, samples=2 00:19:32.195 iops : min= 2782, max= 4398, avg=3590.00, stdev=1142.68, samples=2 00:19:32.195 lat (msec) : 2=0.01%, 4=0.18%, 10=11.44%, 20=59.95%, 50=27.26% 00:19:32.195 lat (msec) : 100=1.16% 00:19:32.195 cpu : usr=3.08%, sys=5.37%, ctx=374, majf=0, minf=1 00:19:32.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:32.195 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:32.195 issued rwts: total=3584,3662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:32.195 00:19:32.195 Run status group 0 (all jobs): 00:19:32.195 READ: bw=59.0MiB/s (61.9MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=59.5MiB (62.4MB), run=1006-1008msec 00:19:32.195 WRITE: bw=60.7MiB/s (63.7MB/s), 13.9MiB/s-18.2MiB/s (14.6MB/s-19.0MB/s), io=61.2MiB (64.2MB), run=1006-1008msec 00:19:32.195 00:19:32.195 Disk stats (read/write): 00:19:32.195 nvme0n1: ios=2824/3072, merge=0/0, ticks=29855/28017, in_queue=57872, util=95.29% 00:19:32.195 nvme0n2: ios=2872/3072, merge=0/0, ticks=27720/34720, in_queue=62440, util=93.76% 00:19:32.195 nvme0n3: ios=3608/3941, merge=0/0, ticks=43112/43853, in_queue=86965, util=88.17% 00:19:32.195 nvme0n4: ios=2560/3072, merge=0/0, ticks=34094/33885, in_queue=67979, util=89.30% 00:19:32.195 11:55:22 -- target/fio.sh@55 -- # sync 00:19:32.195 11:55:22 -- target/fio.sh@59 -- # fio_pid=2497905 00:19:32.195 11:55:22 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:32.195 11:55:22 -- target/fio.sh@61 -- # sleep 3 00:19:32.195 [global] 00:19:32.195 thread=1 00:19:32.195 invalidate=1 00:19:32.195 rw=read 00:19:32.195 time_based=1 00:19:32.195 runtime=10 00:19:32.195 ioengine=libaio 00:19:32.195 direct=1 00:19:32.195 bs=4096 00:19:32.195 iodepth=1 00:19:32.195 norandommap=1 00:19:32.195 numjobs=1 00:19:32.195 00:19:32.195 [job0] 00:19:32.195 filename=/dev/nvme0n1 00:19:32.195 [job1] 00:19:32.195 filename=/dev/nvme0n2 00:19:32.195 [job2] 00:19:32.195 filename=/dev/nvme0n3 00:19:32.195 [job3] 00:19:32.195 filename=/dev/nvme0n4 00:19:32.195 Could not set queue depth (nvme0n1) 00:19:32.195 Could not set queue depth (nvme0n2) 00:19:32.195 Could not set queue depth (nvme0n3) 00:19:32.195 Could not set queue depth (nvme0n4) 00:19:32.453 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.453 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.453 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.453 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.453 fio-3.35 00:19:32.453 Starting 4 threads 00:19:35.744 11:55:25 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:35.744 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=17559552, buflen=4096 00:19:35.744 fio: pid=2498122, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:35.744 11:55:25 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:35.744 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9797632, buflen=4096 00:19:35.744 fio: pid=2498114, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:35.744 11:55:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:35.744 11:55:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:35.744 fio: io_u error on file 
/dev/nvme0n1: Remote I/O error: read offset=1388544, buflen=4096 00:19:35.744 fio: pid=2498080, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:35.744 11:55:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:35.744 11:55:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:36.005 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1114112, buflen=4096 00:19:36.005 fio: pid=2498093, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:36.005 11:55:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:36.005 11:55:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:36.005 00:19:36.005 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2498080: Thu Apr 18 11:55:26 2024 00:19:36.005 read: IOPS=112, BW=450KiB/s (461kB/s)(1356KiB/3014msec) 00:19:36.005 slat (usec): min=7, max=17600, avg=64.68, stdev=953.84 00:19:36.005 clat (usec): min=386, max=42034, avg=8761.03, stdev=16381.56 00:19:36.005 lat (usec): min=395, max=58982, avg=8825.83, stdev=16518.04 00:19:36.005 clat percentiles (usec): 00:19:36.005 | 1.00th=[ 404], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 453], 00:19:36.005 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 498], 60.00th=[ 510], 00:19:36.005 | 70.00th=[ 545], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:19:36.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:36.005 | 99.99th=[42206] 00:19:36.005 bw ( KiB/s): min= 96, max= 2224, per=5.88%, avg=523.20, stdev=950.78, samples=5 00:19:36.005 iops : min= 24, max= 556, avg=130.80, stdev=237.70, samples=5 00:19:36.005 lat (usec) : 500=52.35%, 750=26.47%, 1000=0.59% 00:19:36.005 lat (msec) : 50=20.29% 00:19:36.005 cpu : usr=0.00%, sys=0.23%, ctx=341, majf=0, minf=1 00:19:36.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:36.005 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2498093: Thu Apr 18 11:55:26 2024 00:19:36.005 read: IOPS=83, BW=332KiB/s (340kB/s)(1088KiB/3276msec) 00:19:36.005 slat (usec): min=8, max=18741, avg=107.23, stdev=1202.17 00:19:36.005 clat (usec): min=367, max=42334, avg=11845.88, stdev=18270.99 00:19:36.005 lat (usec): min=376, max=61015, avg=11928.86, stdev=18426.89 00:19:36.005 clat percentiles (usec): 00:19:36.005 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 424], 20.00th=[ 445], 00:19:36.005 | 30.00th=[ 461], 40.00th=[ 486], 50.00th=[ 510], 60.00th=[ 545], 00:19:36.005 | 70.00th=[ 693], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:36.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:36.005 | 99.99th=[42206] 00:19:36.005 bw ( KiB/s): min= 90, max= 1640, per=3.97%, avg=353.67, stdev=630.19, samples=6 00:19:36.005 iops : min= 22, max= 410, avg=88.33, stdev=157.59, samples=6 00:19:36.005 lat (usec) : 500=45.05%, 750=26.37%, 1000=0.37% 00:19:36.005 lat (msec) : 50=27.84% 00:19:36.005 cpu : 
usr=0.12%, sys=0.24%, ctx=276, majf=0, minf=1 00:19:36.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:36.005 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2498114: Thu Apr 18 11:55:26 2024 00:19:36.005 read: IOPS=853, BW=3411KiB/s (3493kB/s)(9568KiB/2805msec) 00:19:36.005 slat (usec): min=8, max=18211, avg=28.86, stdev=484.96 00:19:36.005 clat (usec): min=443, max=42007, avg=1132.19, stdev=4497.86 00:19:36.005 lat (usec): min=452, max=42032, avg=1161.06, stdev=4523.70 00:19:36.005 clat percentiles (usec): 00:19:36.005 | 1.00th=[ 469], 5.00th=[ 537], 10.00th=[ 562], 20.00th=[ 578], 00:19:36.005 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 644], 00:19:36.005 | 70.00th=[ 652], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:19:36.005 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:19:36.005 | 99.99th=[42206] 00:19:36.005 bw ( KiB/s): min= 96, max= 6264, per=35.77%, avg=3184.00, stdev=3023.50, samples=5 00:19:36.005 iops : min= 24, max= 1566, avg=796.00, stdev=755.87, samples=5 00:19:36.005 lat (usec) : 500=2.59%, 750=95.32%, 1000=0.79% 00:19:36.005 lat (msec) : 50=1.25% 00:19:36.005 cpu : usr=0.53%, sys=1.50%, ctx=2395, majf=0, minf=1 00:19:36.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:36.005 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2498122: Thu Apr 18 11:55:26 2024 00:19:36.005 read: IOPS=1634, BW=6535KiB/s (6692kB/s)(16.7MiB/2624msec) 00:19:36.005 slat (nsec): min=8750, max=45190, avg=9779.07, stdev=1751.78 00:19:36.005 clat (usec): min=422, max=1541, avg=594.38, stdev=49.40 00:19:36.005 lat (usec): min=432, max=1551, avg=604.16, stdev=49.50 00:19:36.005 clat percentiles (usec): 00:19:36.005 | 1.00th=[ 453], 5.00th=[ 519], 10.00th=[ 553], 20.00th=[ 570], 00:19:36.005 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 603], 00:19:36.005 | 70.00th=[ 611], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 668], 00:19:36.005 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 930], 99.95th=[ 1237], 00:19:36.005 | 99.99th=[ 1549] 00:19:36.005 bw ( KiB/s): min= 6472, max= 6784, per=74.42%, avg=6624.00, stdev=112.71, samples=5 00:19:36.005 iops : min= 1618, max= 1696, avg=1656.00, stdev=28.18, samples=5 00:19:36.005 lat (usec) : 500=3.78%, 750=95.92%, 1000=0.19% 00:19:36.005 lat (msec) : 2=0.09% 00:19:36.005 cpu : usr=1.18%, sys=2.82%, ctx=4288, majf=0, minf=2 00:19:36.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.005 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.005 latency : target=0, window=0, percentile=100.00%, depth=1 
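The per-job errors above (err=121, Remote I/O error; err=5, Input/output error) are the intended outcome of this stage: fio keeps reading from the connected NVMe-oF namespaces while target/fio.sh deletes the backing bdevs out from under the subsystem. A minimal sketch of that hotplug sequence, assuming the SPDK repo root as working directory, the default rpc.py socket, and the bdev names visible in this trace (concat0, raid0, Malloc0, Malloc1), could look like the following; the real test drives fio through scripts/fio-wrapper rather than invoking it directly:

    # Long-running read workload against the four connected namespaces, in the background.
    fio --ioengine=libaio --direct=1 --rw=read --bs=4096 --iodepth=1 --time_based --runtime=10 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4 &
    fio_pid=$!

    # Remove the backing bdevs while I/O is still in flight.
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_malloc_delete Malloc1

    # fio is expected to exit non-zero with I/O errors on every namespace.
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

In the trace below, target/fio.sh records the non-zero result as fio_status=4 and prints the same "fio failed as expected" message before disconnecting the initiator and deleting the subsystem.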
00:19:36.005 00:19:36.005 Run status group 0 (all jobs): 00:19:36.005 READ: bw=8901KiB/s (9115kB/s), 332KiB/s-6535KiB/s (340kB/s-6692kB/s), io=28.5MiB (29.9MB), run=2624-3276msec 00:19:36.005 00:19:36.005 Disk stats (read/write): 00:19:36.005 nvme0n1: ios=334/0, merge=0/0, ticks=2764/0, in_queue=2764, util=93.82% 00:19:36.005 nvme0n2: ios=267/0, merge=0/0, ticks=3013/0, in_queue=3013, util=94.85% 00:19:36.005 nvme0n3: ios=2102/0, merge=0/0, ticks=2523/0, in_queue=2523, util=95.94% 00:19:36.005 nvme0n4: ios=4251/0, merge=0/0, ticks=2492/0, in_queue=2492, util=96.41% 00:19:36.265 11:55:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:36.265 11:55:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:36.524 11:55:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:36.524 11:55:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:36.783 11:55:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:36.783 11:55:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:37.042 11:55:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:37.042 11:55:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:37.042 11:55:27 -- target/fio.sh@69 -- # fio_status=0 00:19:37.301 11:55:27 -- target/fio.sh@70 -- # wait 2497905 00:19:37.301 11:55:27 -- target/fio.sh@70 -- # fio_status=4 00:19:37.301 11:55:27 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:38.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.258 11:55:28 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:38.258 11:55:28 -- common/autotest_common.sh@1205 -- # local i=0 00:19:38.258 11:55:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:38.258 11:55:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.258 11:55:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.258 11:55:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:38.258 11:55:28 -- common/autotest_common.sh@1217 -- # return 0 00:19:38.258 11:55:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:38.258 11:55:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:38.259 nvmf hotplug test: fio failed as expected 00:19:38.259 11:55:28 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.517 11:55:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:38.517 11:55:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:38.517 11:55:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:38.517 11:55:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:38.517 11:55:28 -- target/fio.sh@91 -- # nvmftestfini 00:19:38.517 11:55:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:38.517 11:55:28 -- nvmf/common.sh@117 -- # sync 00:19:38.517 11:55:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.517 11:55:28 -- nvmf/common.sh@120 -- # set +e 00:19:38.517 11:55:28 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.517 11:55:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:38.517 rmmod nvme_tcp 00:19:38.517 rmmod nvme_fabrics 00:19:38.517 rmmod nvme_keyring 00:19:38.517 11:55:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.517 11:55:28 -- nvmf/common.sh@124 -- # set -e 00:19:38.517 11:55:28 -- nvmf/common.sh@125 -- # return 0 00:19:38.517 11:55:28 -- nvmf/common.sh@478 -- # '[' -n 2494814 ']' 00:19:38.517 11:55:28 -- nvmf/common.sh@479 -- # killprocess 2494814 00:19:38.517 11:55:28 -- common/autotest_common.sh@936 -- # '[' -z 2494814 ']' 00:19:38.517 11:55:28 -- common/autotest_common.sh@940 -- # kill -0 2494814 00:19:38.517 11:55:28 -- common/autotest_common.sh@941 -- # uname 00:19:38.517 11:55:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:38.517 11:55:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2494814 00:19:38.517 11:55:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:38.517 11:55:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:38.517 11:55:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2494814' 00:19:38.517 killing process with pid 2494814 00:19:38.517 11:55:29 -- common/autotest_common.sh@955 -- # kill 2494814 00:19:38.517 11:55:29 -- common/autotest_common.sh@960 -- # wait 2494814 00:19:39.895 11:55:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:39.896 11:55:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:39.896 11:55:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:39.896 11:55:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.896 11:55:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.896 11:55:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.896 11:55:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.896 11:55:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.504 11:55:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:42.504 00:19:42.504 real 0m30.425s 00:19:42.504 user 2m11.321s 00:19:42.504 sys 0m9.447s 00:19:42.504 11:55:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:42.504 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:19:42.504 ************************************ 00:19:42.504 END TEST nvmf_fio_target 00:19:42.504 ************************************ 00:19:42.504 11:55:32 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:42.504 11:55:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.504 11:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.504 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:19:42.504 ************************************ 00:19:42.504 START TEST nvmf_bdevio 00:19:42.504 ************************************ 00:19:42.504 11:55:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:42.504 * Looking for test storage... 
00:19:42.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.504 11:55:32 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.504 11:55:32 -- nvmf/common.sh@7 -- # uname -s 00:19:42.504 11:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.504 11:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.504 11:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.504 11:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.504 11:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.504 11:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.504 11:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.504 11:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.504 11:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.504 11:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.504 11:55:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:42.504 11:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:42.505 11:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.505 11:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.505 11:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.505 11:55:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.505 11:55:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.505 11:55:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.505 11:55:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.505 11:55:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.505 11:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.505 11:55:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.505 11:55:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.505 11:55:32 -- paths/export.sh@5 -- # export PATH 00:19:42.505 11:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.505 11:55:32 -- nvmf/common.sh@47 -- # : 0 00:19:42.505 11:55:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.505 11:55:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.505 11:55:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.505 11:55:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.505 11:55:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.505 11:55:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.505 11:55:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.505 11:55:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.505 11:55:32 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.505 11:55:32 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.505 11:55:32 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:42.505 11:55:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:42.505 11:55:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.505 11:55:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:42.505 11:55:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:42.505 11:55:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:42.505 11:55:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.505 11:55:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.505 11:55:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.505 11:55:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:42.505 11:55:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:42.505 11:55:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.505 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:19:49.077 11:55:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.077 11:55:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.077 11:55:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.077 11:55:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.077 11:55:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.077 11:55:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.077 11:55:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.077 11:55:39 -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.077 11:55:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.077 11:55:39 -- nvmf/common.sh@296 
-- # e810=() 00:19:49.077 11:55:39 -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.077 11:55:39 -- nvmf/common.sh@297 -- # x722=() 00:19:49.077 11:55:39 -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.077 11:55:39 -- nvmf/common.sh@298 -- # mlx=() 00:19:49.077 11:55:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.077 11:55:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.077 11:55:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.078 11:55:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.078 11:55:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.078 11:55:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.078 11:55:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.078 11:55:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:49.078 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:49.078 11:55:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.078 11:55:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:49.078 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:49.078 11:55:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.078 11:55:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.078 11:55:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.078 11:55:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:49.078 11:55:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.078 11:55:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:49.078 Found 
net devices under 0000:af:00.0: cvl_0_0 00:19:49.078 11:55:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.078 11:55:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.078 11:55:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.078 11:55:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:49.078 11:55:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.078 11:55:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:49.078 Found net devices under 0000:af:00.1: cvl_0_1 00:19:49.078 11:55:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.078 11:55:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:49.078 11:55:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:49.078 11:55:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:49.078 11:55:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.078 11:55:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.078 11:55:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.078 11:55:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.078 11:55:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.078 11:55:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.078 11:55:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.078 11:55:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.078 11:55:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.078 11:55:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.078 11:55:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.078 11:55:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.078 11:55:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.078 11:55:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.078 11:55:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.078 11:55:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.078 11:55:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.078 11:55:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.078 11:55:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.078 11:55:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:19:49.078 00:19:49.078 --- 10.0.0.2 ping statistics --- 00:19:49.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.078 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:19:49.078 11:55:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:19:49.078 00:19:49.078 --- 10.0.0.1 ping statistics --- 00:19:49.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.078 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:49.078 11:55:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.078 11:55:39 -- nvmf/common.sh@411 -- # return 0 00:19:49.078 11:55:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:49.078 11:55:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.078 11:55:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:49.078 11:55:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.078 11:55:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:49.078 11:55:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:49.078 11:55:39 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:49.078 11:55:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:49.078 11:55:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:49.078 11:55:39 -- common/autotest_common.sh@10 -- # set +x 00:19:49.078 11:55:39 -- nvmf/common.sh@470 -- # nvmfpid=2502861 00:19:49.078 11:55:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:49.078 11:55:39 -- nvmf/common.sh@471 -- # waitforlisten 2502861 00:19:49.078 11:55:39 -- common/autotest_common.sh@817 -- # '[' -z 2502861 ']' 00:19:49.078 11:55:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.078 11:55:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:49.078 11:55:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.078 11:55:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:49.078 11:55:39 -- common/autotest_common.sh@10 -- # set +x 00:19:49.338 [2024-04-18 11:55:39.640504] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:49.338 [2024-04-18 11:55:39.640595] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.338 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.338 [2024-04-18 11:55:39.771115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.597 [2024-04-18 11:55:39.991496] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.597 [2024-04-18 11:55:39.991541] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.597 [2024-04-18 11:55:39.991554] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.597 [2024-04-18 11:55:39.991566] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.597 [2024-04-18 11:55:39.991576] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
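The nvmf_tcp_init steps replayed above put one port of the E810 NIC (cvl_0_0) into a private network namespace for the target and leave the other port (cvl_0_1) in the root namespace as the initiator, so NVMe/TCP traffic crosses a real link. The essential commands, copied from the trace with only the ordering compressed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-facing port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP port 4420 on cvl_0_1, as in the trace
    ping -c 1 10.0.0.2                                                 # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1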
00:19:49.597 [2024-04-18 11:55:39.991760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:49.597 [2024-04-18 11:55:39.991854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:49.597 [2024-04-18 11:55:39.991926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.597 [2024-04-18 11:55:39.991951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:50.166 11:55:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:50.166 11:55:40 -- common/autotest_common.sh@850 -- # return 0 00:19:50.166 11:55:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:50.166 11:55:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:50.166 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 11:55:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.166 11:55:40 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.166 11:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.166 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 [2024-04-18 11:55:40.461639] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.166 11:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.166 11:55:40 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:50.166 11:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.166 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 Malloc0 00:19:50.166 11:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.166 11:55:40 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:50.166 11:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.166 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 11:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.166 11:55:40 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.166 11:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.166 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 11:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.166 11:55:40 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.166 11:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.166 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 [2024-04-18 11:55:40.581756] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.166 11:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.166 11:55:40 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:50.166 11:55:40 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:50.166 11:55:40 -- nvmf/common.sh@521 -- # config=() 00:19:50.166 11:55:40 -- nvmf/common.sh@521 -- # local subsystem config 00:19:50.166 11:55:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:50.166 11:55:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:50.166 { 00:19:50.166 "params": { 00:19:50.166 "name": "Nvme$subsystem", 00:19:50.166 "trtype": "$TEST_TRANSPORT", 00:19:50.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.166 "adrfam": "ipv4", 00:19:50.166 "trsvcid": 
"$NVMF_PORT", 00:19:50.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.166 "hdgst": ${hdgst:-false}, 00:19:50.166 "ddgst": ${ddgst:-false} 00:19:50.166 }, 00:19:50.166 "method": "bdev_nvme_attach_controller" 00:19:50.166 } 00:19:50.166 EOF 00:19:50.166 )") 00:19:50.166 11:55:40 -- nvmf/common.sh@543 -- # cat 00:19:50.166 11:55:40 -- nvmf/common.sh@545 -- # jq . 00:19:50.166 11:55:40 -- nvmf/common.sh@546 -- # IFS=, 00:19:50.166 11:55:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:50.166 "params": { 00:19:50.166 "name": "Nvme1", 00:19:50.166 "trtype": "tcp", 00:19:50.166 "traddr": "10.0.0.2", 00:19:50.166 "adrfam": "ipv4", 00:19:50.166 "trsvcid": "4420", 00:19:50.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.166 "hdgst": false, 00:19:50.166 "ddgst": false 00:19:50.166 }, 00:19:50.166 "method": "bdev_nvme_attach_controller" 00:19:50.166 }' 00:19:50.166 [2024-04-18 11:55:40.667703] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:50.166 [2024-04-18 11:55:40.667793] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503144 ] 00:19:50.426 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.426 [2024-04-18 11:55:40.791565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:50.685 [2024-04-18 11:55:41.010122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.685 [2024-04-18 11:55:41.010206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.685 [2024-04-18 11:55:41.010209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.253 I/O targets: 00:19:51.253 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:51.253 00:19:51.253 00:19:51.253 CUnit - A unit testing framework for C - Version 2.1-3 00:19:51.253 http://cunit.sourceforge.net/ 00:19:51.253 00:19:51.253 00:19:51.253 Suite: bdevio tests on: Nvme1n1 00:19:51.253 Test: blockdev write read block ...passed 00:19:51.253 Test: blockdev write zeroes read block ...passed 00:19:51.253 Test: blockdev write zeroes read no split ...passed 00:19:51.253 Test: blockdev write zeroes read split ...passed 00:19:51.253 Test: blockdev write zeroes read split partial ...passed 00:19:51.253 Test: blockdev reset ...[2024-04-18 11:55:41.787557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.253 [2024-04-18 11:55:41.787669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:19:51.512 [2024-04-18 11:55:41.808547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:51.512 passed 00:19:51.512 Test: blockdev write read 8 blocks ...passed 00:19:51.512 Test: blockdev write read size > 128k ...passed 00:19:51.512 Test: blockdev write read invalid size ...passed 00:19:51.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:51.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:51.512 Test: blockdev write read max offset ...passed 00:19:51.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:51.512 Test: blockdev writev readv 8 blocks ...passed 00:19:51.772 Test: blockdev writev readv 30 x 1block ...passed 00:19:51.772 Test: blockdev writev readv block ...passed 00:19:51.772 Test: blockdev writev readv size > 128k ...passed 00:19:51.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:51.772 Test: blockdev comparev and writev ...[2024-04-18 11:55:42.112688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.112737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.112766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.112779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.113186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.113205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.113228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.113242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.113647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.113667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.113685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.113698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.114088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.114106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.114124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.772 [2024-04-18 11:55:42.114138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:51.772 passed 00:19:51.772 Test: blockdev nvme passthru rw ...passed 00:19:51.772 Test: blockdev nvme passthru vendor specific ...[2024-04-18 11:55:42.196153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.772 [2024-04-18 11:55:42.196184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.196435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.772 [2024-04-18 11:55:42.196456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.196707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.772 [2024-04-18 11:55:42.196724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:51.772 [2024-04-18 11:55:42.196965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.772 [2024-04-18 11:55:42.196983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:51.772 passed 00:19:51.772 Test: blockdev nvme admin passthru ...passed 00:19:51.772 Test: blockdev copy ...passed 00:19:51.772 00:19:51.772 Run Summary: Type Total Ran Passed Failed Inactive 00:19:51.772 suites 1 1 n/a 0 0 00:19:51.772 tests 23 23 23 0 0 00:19:51.772 asserts 152 152 152 0 n/a 00:19:51.772 00:19:51.772 Elapsed time = 1.560 seconds 00:19:53.151 11:55:43 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.151 11:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.151 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:19:53.151 11:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.151 11:55:43 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:53.151 11:55:43 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:53.151 11:55:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:53.151 11:55:43 -- nvmf/common.sh@117 -- # sync 00:19:53.151 11:55:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.151 11:55:43 -- nvmf/common.sh@120 -- # set +e 00:19:53.151 11:55:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.151 11:55:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.151 rmmod nvme_tcp 00:19:53.151 rmmod nvme_fabrics 00:19:53.151 rmmod nvme_keyring 00:19:53.151 11:55:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.151 11:55:43 -- nvmf/common.sh@124 -- # set -e 00:19:53.151 11:55:43 -- nvmf/common.sh@125 -- # return 0 00:19:53.151 11:55:43 -- nvmf/common.sh@478 -- # '[' -n 2502861 ']' 00:19:53.151 11:55:43 -- nvmf/common.sh@479 -- # killprocess 2502861 00:19:53.151 11:55:43 -- common/autotest_common.sh@936 -- # '[' -z 2502861 ']' 00:19:53.151 11:55:43 -- common/autotest_common.sh@940 -- # kill -0 2502861 00:19:53.151 11:55:43 -- common/autotest_common.sh@941 -- # uname 00:19:53.151 11:55:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.151 11:55:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2502861 00:19:53.151 11:55:43 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:53.151 11:55:43 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:53.151 11:55:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2502861' 00:19:53.151 killing process with pid 2502861 00:19:53.151 11:55:43 -- common/autotest_common.sh@955 -- # kill 2502861 00:19:53.151 11:55:43 -- common/autotest_common.sh@960 -- # wait 2502861 00:19:54.530 11:55:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:54.530 11:55:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:54.530 11:55:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:54.530 11:55:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.530 11:55:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.530 11:55:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.530 11:55:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.530 11:55:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.439 11:55:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.439 00:19:56.439 real 0m14.351s 00:19:56.439 user 0m24.767s 00:19:56.439 sys 0m6.131s 00:19:56.439 11:55:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:56.439 11:55:46 -- common/autotest_common.sh@10 -- # set +x 00:19:56.439 ************************************ 00:19:56.439 END TEST nvmf_bdevio 00:19:56.439 ************************************ 00:19:56.439 11:55:46 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:56.439 11:55:46 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.439 11:55:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:56.439 11:55:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.439 11:55:46 -- common/autotest_common.sh@10 -- # set +x 00:19:56.699 ************************************ 00:19:56.699 START TEST nvmf_bdevio_no_huge 00:19:56.699 ************************************ 00:19:56.699 11:55:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.699 * Looking for test storage... 
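Between suites the harness tears down everything nvmftestinit built, which is what the rmmod/killprocess lines above show. A hedged recap in trace order (the body of _remove_spdk_ns is not visible in this log, so its effect is an assumption):

    # nvmftestfini, as reconstructed from the trace
    sync
    modprobe -v -r nvme-tcp        # the rmmod lines for nvme_tcp, nvme_fabrics, nvme_keyring come from this
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"         # pid 2502861 / reactor_3 in the run above
    _remove_spdk_ns                # presumably deletes cvl_0_0_ns_spdk; not shown in the log
    ip -4 addr flush cvl_0_1       # drop the 10.0.0.1/24 initiator address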
00:19:56.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.958 11:55:47 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.958 11:55:47 -- nvmf/common.sh@7 -- # uname -s 00:19:56.958 11:55:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.958 11:55:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.958 11:55:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.958 11:55:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.958 11:55:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.958 11:55:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.958 11:55:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.958 11:55:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.958 11:55:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.958 11:55:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.958 11:55:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:56.958 11:55:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:56.958 11:55:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.958 11:55:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.958 11:55:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.958 11:55:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.958 11:55:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.958 11:55:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.958 11:55:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.959 11:55:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.959 11:55:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.959 11:55:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.959 11:55:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.959 11:55:47 -- paths/export.sh@5 -- # export PATH 00:19:56.959 11:55:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.959 11:55:47 -- nvmf/common.sh@47 -- # : 0 00:19:56.959 11:55:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.959 11:55:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.959 11:55:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.959 11:55:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.959 11:55:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.959 11:55:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.959 11:55:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.959 11:55:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.959 11:55:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.959 11:55:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.959 11:55:47 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:56.959 11:55:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:56.959 11:55:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.959 11:55:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:56.959 11:55:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:56.959 11:55:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:56.959 11:55:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.959 11:55:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.959 11:55:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.959 11:55:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:56.959 11:55:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:56.959 11:55:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.959 11:55:47 -- common/autotest_common.sh@10 -- # set +x 00:20:03.532 11:55:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:03.532 11:55:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.532 11:55:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.532 11:55:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.532 11:55:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.532 11:55:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.532 11:55:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.532 11:55:53 -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.532 11:55:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.532 11:55:53 -- nvmf/common.sh@296 
-- # e810=() 00:20:03.532 11:55:53 -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.532 11:55:53 -- nvmf/common.sh@297 -- # x722=() 00:20:03.532 11:55:53 -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.532 11:55:53 -- nvmf/common.sh@298 -- # mlx=() 00:20:03.532 11:55:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.532 11:55:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.532 11:55:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.532 11:55:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.532 11:55:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.532 11:55:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.532 11:55:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:03.532 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:03.532 11:55:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.532 11:55:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:03.532 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:03.532 11:55:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.532 11:55:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.532 11:55:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.532 11:55:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:03.532 11:55:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.532 11:55:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:03.532 Found 
net devices under 0000:af:00.0: cvl_0_0 00:20:03.532 11:55:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.532 11:55:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.532 11:55:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.532 11:55:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:03.532 11:55:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.532 11:55:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:03.532 Found net devices under 0000:af:00.1: cvl_0_1 00:20:03.532 11:55:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.532 11:55:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:03.532 11:55:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:03.532 11:55:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:03.532 11:55:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:03.532 11:55:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.532 11:55:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.532 11:55:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.532 11:55:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.532 11:55:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.532 11:55:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.532 11:55:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.532 11:55:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.532 11:55:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.532 11:55:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.532 11:55:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.532 11:55:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.532 11:55:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.532 11:55:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.532 11:55:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.532 11:55:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.532 11:55:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.533 11:55:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.533 11:55:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.533 11:55:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:20:03.533 00:20:03.533 --- 10.0.0.2 ping statistics --- 00:20:03.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.533 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:03.533 11:55:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:03.533 00:20:03.533 --- 10.0.0.1 ping statistics --- 00:20:03.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.533 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:03.533 11:55:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.533 11:55:53 -- nvmf/common.sh@411 -- # return 0 00:20:03.533 11:55:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:03.533 11:55:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.533 11:55:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:03.533 11:55:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:03.533 11:55:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.533 11:55:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:03.533 11:55:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:03.533 11:55:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:03.533 11:55:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:03.533 11:55:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:03.533 11:55:53 -- common/autotest_common.sh@10 -- # set +x 00:20:03.533 11:55:53 -- nvmf/common.sh@470 -- # nvmfpid=2507391 00:20:03.533 11:55:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:03.533 11:55:53 -- nvmf/common.sh@471 -- # waitforlisten 2507391 00:20:03.533 11:55:53 -- common/autotest_common.sh@817 -- # '[' -z 2507391 ']' 00:20:03.533 11:55:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.533 11:55:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.533 11:55:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.533 11:55:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.533 11:55:53 -- common/autotest_common.sh@10 -- # set +x 00:20:03.533 [2024-04-18 11:55:53.742983] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:03.533 [2024-04-18 11:55:53.743070] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:03.533 [2024-04-18 11:55:53.888309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.793 [2024-04-18 11:55:54.125506] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.793 [2024-04-18 11:55:54.125551] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.793 [2024-04-18 11:55:54.125564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.793 [2024-04-18 11:55:54.125577] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.793 [2024-04-18 11:55:54.125586] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
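Functionally this suite repeats the bdevio run above; the difference is only in how the target (and later the bdevio app) is launched: hugepages are disabled and the memory size is capped. The invocation flags are copied from the trace (paths shortened); the waitforlisten polling is a sketch, and rpc_get_methods is just a convenient RPC to poll, not necessarily what the real helper uses:

    # earlier bdevio run
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    # this run: --no-huge -s 1024 makes DPDK use 1024 MB of non-hugepage memory (--no-huge --iova-mode=va in the EAL line)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # waitforlisten: poll the RPC socket until the target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done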
00:20:03.793 [2024-04-18 11:55:54.125753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.793 [2024-04-18 11:55:54.125834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:03.793 [2024-04-18 11:55:54.125902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.793 [2024-04-18 11:55:54.125928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:04.052 11:55:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.052 11:55:54 -- common/autotest_common.sh@850 -- # return 0 00:20:04.052 11:55:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:04.052 11:55:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.052 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:20:04.052 11:55:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.052 11:55:54 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.052 11:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.052 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:20:04.052 [2024-04-18 11:55:54.581911] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.052 11:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.052 11:55:54 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.052 11:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.052 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:20:04.347 Malloc0 00:20:04.347 11:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.347 11:55:54 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.347 11:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.347 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:20:04.347 11:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.347 11:55:54 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.347 11:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.347 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:20:04.347 11:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.347 11:55:54 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.347 11:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.347 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:20:04.347 [2024-04-18 11:55:54.703940] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.347 11:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.347 11:55:54 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:04.347 11:55:54 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:04.347 11:55:54 -- nvmf/common.sh@521 -- # config=() 00:20:04.347 11:55:54 -- nvmf/common.sh@521 -- # local subsystem config 00:20:04.347 11:55:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:04.347 11:55:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:04.347 { 00:20:04.347 "params": { 00:20:04.347 "name": "Nvme$subsystem", 00:20:04.347 "trtype": "$TEST_TRANSPORT", 00:20:04.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.347 "adrfam": "ipv4", 00:20:04.347 
"trsvcid": "$NVMF_PORT", 00:20:04.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.347 "hdgst": ${hdgst:-false}, 00:20:04.347 "ddgst": ${ddgst:-false} 00:20:04.347 }, 00:20:04.347 "method": "bdev_nvme_attach_controller" 00:20:04.347 } 00:20:04.347 EOF 00:20:04.347 )") 00:20:04.347 11:55:54 -- nvmf/common.sh@543 -- # cat 00:20:04.347 11:55:54 -- nvmf/common.sh@545 -- # jq . 00:20:04.347 11:55:54 -- nvmf/common.sh@546 -- # IFS=, 00:20:04.347 11:55:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:04.347 "params": { 00:20:04.347 "name": "Nvme1", 00:20:04.347 "trtype": "tcp", 00:20:04.347 "traddr": "10.0.0.2", 00:20:04.347 "adrfam": "ipv4", 00:20:04.347 "trsvcid": "4420", 00:20:04.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.347 "hdgst": false, 00:20:04.347 "ddgst": false 00:20:04.347 }, 00:20:04.347 "method": "bdev_nvme_attach_controller" 00:20:04.347 }' 00:20:04.347 [2024-04-18 11:55:54.789051] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:04.347 [2024-04-18 11:55:54.789132] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2507669 ] 00:20:04.607 [2024-04-18 11:55:54.927579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.866 [2024-04-18 11:55:55.174807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.866 [2024-04-18 11:55:55.174872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.866 [2024-04-18 11:55:55.174879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.125 I/O targets: 00:20:05.125 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:05.125 00:20:05.125 00:20:05.125 CUnit - A unit testing framework for C - Version 2.1-3 00:20:05.125 http://cunit.sourceforge.net/ 00:20:05.125 00:20:05.125 00:20:05.125 Suite: bdevio tests on: Nvme1n1 00:20:05.384 Test: blockdev write read block ...passed 00:20:05.384 Test: blockdev write zeroes read block ...passed 00:20:05.384 Test: blockdev write zeroes read no split ...passed 00:20:05.384 Test: blockdev write zeroes read split ...passed 00:20:05.384 Test: blockdev write zeroes read split partial ...passed 00:20:05.384 Test: blockdev reset ...[2024-04-18 11:55:55.917407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.384 [2024-04-18 11:55:55.917532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:20:05.384 [2024-04-18 11:55:55.930581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:05.384 passed 00:20:05.644 Test: blockdev write read 8 blocks ...passed 00:20:05.644 Test: blockdev write read size > 128k ...passed 00:20:05.644 Test: blockdev write read invalid size ...passed 00:20:05.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:05.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:05.644 Test: blockdev write read max offset ...passed 00:20:05.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:05.644 Test: blockdev writev readv 8 blocks ...passed 00:20:05.644 Test: blockdev writev readv 30 x 1block ...passed 00:20:05.644 Test: blockdev writev readv block ...passed 00:20:05.644 Test: blockdev writev readv size > 128k ...passed 00:20:05.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:05.644 Test: blockdev comparev and writev ...[2024-04-18 11:55:56.152419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.152468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.152491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.152505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.152932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.152959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.152978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.152991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.153396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.153414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.153435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.153454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.153840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.153859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.644 [2024-04-18 11:55:56.153877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.644 [2024-04-18 11:55:56.153896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.905 passed 00:20:05.905 Test: blockdev nvme passthru rw ...passed 00:20:05.905 Test: blockdev nvme passthru vendor specific ...[2024-04-18 11:55:56.236109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.905 [2024-04-18 11:55:56.236141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.905 [2024-04-18 11:55:56.236390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.905 [2024-04-18 11:55:56.236406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.905 [2024-04-18 11:55:56.236663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.905 [2024-04-18 11:55:56.236680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.905 [2024-04-18 11:55:56.236923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.905 [2024-04-18 11:55:56.236940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.905 passed 00:20:05.905 Test: blockdev nvme admin passthru ...passed 00:20:05.905 Test: blockdev copy ...passed 00:20:05.905 00:20:05.905 Run Summary: Type Total Ran Passed Failed Inactive 00:20:05.905 suites 1 1 n/a 0 0 00:20:05.905 tests 23 23 23 0 0 00:20:05.905 asserts 152 152 152 0 n/a 00:20:05.905 00:20:05.905 Elapsed time = 1.307 seconds 00:20:06.475 11:55:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.475 11:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.475 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:20:06.475 11:55:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.475 11:55:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:06.475 11:55:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:06.475 11:55:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:06.475 11:55:57 -- nvmf/common.sh@117 -- # sync 00:20:06.475 11:55:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.475 11:55:57 -- nvmf/common.sh@120 -- # set +e 00:20:06.475 11:55:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.475 11:55:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.475 rmmod nvme_tcp 00:20:06.735 rmmod nvme_fabrics 00:20:06.735 rmmod nvme_keyring 00:20:06.735 11:55:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.735 11:55:57 -- nvmf/common.sh@124 -- # set -e 00:20:06.735 11:55:57 -- nvmf/common.sh@125 -- # return 0 00:20:06.735 11:55:57 -- nvmf/common.sh@478 -- # '[' -n 2507391 ']' 00:20:06.735 11:55:57 -- nvmf/common.sh@479 -- # killprocess 2507391 00:20:06.735 11:55:57 -- common/autotest_common.sh@936 -- # '[' -z 2507391 ']' 00:20:06.735 11:55:57 -- common/autotest_common.sh@940 -- # kill -0 2507391 00:20:06.735 11:55:57 -- common/autotest_common.sh@941 -- # uname 00:20:06.735 11:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.735 11:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2507391 00:20:06.735 11:55:57 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:06.735 11:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:06.735 11:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2507391' 00:20:06.735 killing process with pid 2507391 00:20:06.735 11:55:57 -- common/autotest_common.sh@955 -- # kill 2507391 00:20:06.735 11:55:57 -- common/autotest_common.sh@960 -- # wait 2507391 00:20:07.672 11:55:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:07.672 11:55:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:07.672 11:55:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:07.672 11:55:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.672 11:55:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.672 11:55:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.672 11:55:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.672 11:55:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.579 11:55:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.579 00:20:09.579 real 0m12.867s 00:20:09.579 user 0m20.335s 00:20:09.579 sys 0m6.098s 00:20:09.579 11:56:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:09.579 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:20:09.579 ************************************ 00:20:09.579 END TEST nvmf_bdevio_no_huge 00:20:09.579 ************************************ 00:20:09.579 11:56:00 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:09.579 11:56:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:09.579 11:56:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.579 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:20:09.839 ************************************ 00:20:09.839 START TEST nvmf_tls 00:20:09.839 ************************************ 00:20:09.839 11:56:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:09.839 * Looking for test storage... 
00:20:09.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.839 11:56:00 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.839 11:56:00 -- nvmf/common.sh@7 -- # uname -s 00:20:09.839 11:56:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.839 11:56:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.839 11:56:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.839 11:56:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.839 11:56:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.839 11:56:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.839 11:56:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.839 11:56:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.839 11:56:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.839 11:56:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.839 11:56:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:09.839 11:56:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:09.839 11:56:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.839 11:56:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.839 11:56:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.839 11:56:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.839 11:56:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.839 11:56:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.839 11:56:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.839 11:56:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.839 11:56:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.839 11:56:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.839 11:56:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.839 11:56:00 -- paths/export.sh@5 -- # export PATH 00:20:09.839 11:56:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.839 11:56:00 -- nvmf/common.sh@47 -- # : 0 00:20:09.839 11:56:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.839 11:56:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.839 11:56:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.839 11:56:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.839 11:56:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.839 11:56:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.839 11:56:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.839 11:56:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.839 11:56:00 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:09.839 11:56:00 -- target/tls.sh@62 -- # nvmftestinit 00:20:09.839 11:56:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:09.839 11:56:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.839 11:56:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:09.839 11:56:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:09.839 11:56:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:09.839 11:56:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.839 11:56:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.839 11:56:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.839 11:56:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:09.839 11:56:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:09.839 11:56:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.839 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:20:16.407 11:56:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:16.407 11:56:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.407 11:56:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.407 11:56:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.407 11:56:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.407 11:56:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.407 11:56:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.407 11:56:06 -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.407 11:56:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.407 11:56:06 -- nvmf/common.sh@296 -- # e810=() 00:20:16.407 
11:56:06 -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.407 11:56:06 -- nvmf/common.sh@297 -- # x722=() 00:20:16.407 11:56:06 -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.407 11:56:06 -- nvmf/common.sh@298 -- # mlx=() 00:20:16.407 11:56:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.407 11:56:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.407 11:56:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.407 11:56:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.407 11:56:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.407 11:56:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.407 11:56:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:16.407 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:16.407 11:56:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.407 11:56:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:16.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:16.407 11:56:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.407 11:56:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.407 11:56:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.407 11:56:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:16.407 11:56:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.407 11:56:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:16.407 Found net devices under 
0000:af:00.0: cvl_0_0 00:20:16.407 11:56:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.407 11:56:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.407 11:56:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.407 11:56:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:16.407 11:56:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.407 11:56:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:16.407 Found net devices under 0000:af:00.1: cvl_0_1 00:20:16.407 11:56:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.407 11:56:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:16.407 11:56:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:16.407 11:56:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:16.407 11:56:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:16.407 11:56:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.407 11:56:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.407 11:56:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.407 11:56:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.407 11:56:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.407 11:56:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.407 11:56:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.407 11:56:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.407 11:56:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.407 11:56:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.407 11:56:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.407 11:56:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.407 11:56:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.407 11:56:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.407 11:56:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.407 11:56:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.407 11:56:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.407 11:56:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.407 11:56:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.666 11:56:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:20:16.666 00:20:16.666 --- 10.0.0.2 ping statistics --- 00:20:16.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.666 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:16.666 11:56:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:20:16.666 00:20:16.666 --- 10.0.0.1 ping statistics --- 00:20:16.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.666 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:16.666 11:56:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.666 11:56:06 -- nvmf/common.sh@411 -- # return 0 00:20:16.666 11:56:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:16.666 11:56:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.666 11:56:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:16.666 11:56:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:16.666 11:56:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.666 11:56:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:16.666 11:56:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:16.666 11:56:07 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:16.666 11:56:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:16.666 11:56:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:16.666 11:56:07 -- common/autotest_common.sh@10 -- # set +x 00:20:16.666 11:56:07 -- nvmf/common.sh@470 -- # nvmfpid=2511900 00:20:16.666 11:56:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:16.666 11:56:07 -- nvmf/common.sh@471 -- # waitforlisten 2511900 00:20:16.666 11:56:07 -- common/autotest_common.sh@817 -- # '[' -z 2511900 ']' 00:20:16.666 11:56:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.666 11:56:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.666 11:56:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.666 11:56:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.666 11:56:07 -- common/autotest_common.sh@10 -- # set +x 00:20:16.666 [2024-04-18 11:56:07.111683] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:16.666 [2024-04-18 11:56:07.111770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.666 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.926 [2024-04-18 11:56:07.239136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.926 [2024-04-18 11:56:07.454196] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.926 [2024-04-18 11:56:07.454241] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.926 [2024-04-18 11:56:07.454253] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.926 [2024-04-18 11:56:07.454266] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.926 [2024-04-18 11:56:07.454275] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
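The nvmf_tcp_init block above (nvmf/common.sh, 00:20:16.407-00:20:16.666) moves one port of the e810 pair into a private network namespace and pings in both directions before the target is launched inside that namespace. A minimal sketch of the same topology, reusing the interface names (cvl_0_0, cvl_0_1) and addresses (10.0.0.1/10.0.0.2) shown in the log, would be roughly:

  ip netns add cvl_0_0_ns_spdk                                  # namespace that will own the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic on port 4420
  ping -c 1 10.0.0.2                                            # target side reachable from the initiator
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and the reverse direction

With that in place, nvmf_tgt is started under ip netns exec cvl_0_0_ns_spdk, which is why the target-side app and RPC commands that follow carry the NVMF_TARGET_NS_CMD prefix.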
00:20:16.926 [2024-04-18 11:56:07.454317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.495 11:56:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.495 11:56:07 -- common/autotest_common.sh@850 -- # return 0 00:20:17.495 11:56:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.495 11:56:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.495 11:56:07 -- common/autotest_common.sh@10 -- # set +x 00:20:17.495 11:56:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.495 11:56:07 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:17.495 11:56:07 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:17.754 true 00:20:17.754 11:56:08 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.754 11:56:08 -- target/tls.sh@73 -- # jq -r .tls_version 00:20:17.754 11:56:08 -- target/tls.sh@73 -- # version=0 00:20:17.754 11:56:08 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:17.754 11:56:08 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:18.014 11:56:08 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.014 11:56:08 -- target/tls.sh@81 -- # jq -r .tls_version 00:20:18.273 11:56:08 -- target/tls.sh@81 -- # version=13 00:20:18.273 11:56:08 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:18.273 11:56:08 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:18.273 11:56:08 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.273 11:56:08 -- target/tls.sh@89 -- # jq -r .tls_version 00:20:18.532 11:56:08 -- target/tls.sh@89 -- # version=7 00:20:18.532 11:56:08 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:18.532 11:56:08 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.532 11:56:08 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:18.791 11:56:09 -- target/tls.sh@96 -- # ktls=false 00:20:18.791 11:56:09 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:18.791 11:56:09 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:18.791 11:56:09 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.791 11:56:09 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:19.049 11:56:09 -- target/tls.sh@104 -- # ktls=true 00:20:19.049 11:56:09 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:19.049 11:56:09 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:19.308 11:56:09 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.308 11:56:09 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:19.308 11:56:09 -- target/tls.sh@112 -- # ktls=false 00:20:19.308 11:56:09 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:19.308 11:56:09 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:20:19.308 11:56:09 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:19.308 11:56:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:19.308 11:56:09 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:19.308 11:56:09 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:20:19.308 11:56:09 -- nvmf/common.sh@693 -- # digest=1 00:20:19.308 11:56:09 -- nvmf/common.sh@694 -- # python - 00:20:19.308 11:56:09 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:19.308 11:56:09 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:19.308 11:56:09 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:19.308 11:56:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:19.308 11:56:09 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:19.308 11:56:09 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:20:19.308 11:56:09 -- nvmf/common.sh@693 -- # digest=1 00:20:19.308 11:56:09 -- nvmf/common.sh@694 -- # python - 00:20:19.308 11:56:09 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:19.308 11:56:09 -- target/tls.sh@121 -- # mktemp 00:20:19.308 11:56:09 -- target/tls.sh@121 -- # key_path=/tmp/tmp.IpcRtNukjS 00:20:19.308 11:56:09 -- target/tls.sh@122 -- # mktemp 00:20:19.308 11:56:09 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Vl3KezMFlB 00:20:19.308 11:56:09 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:19.308 11:56:09 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:19.308 11:56:09 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.IpcRtNukjS 00:20:19.308 11:56:09 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Vl3KezMFlB 00:20:19.567 11:56:09 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:19.567 11:56:10 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:20.135 11:56:10 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.IpcRtNukjS 00:20:20.135 11:56:10 -- target/tls.sh@49 -- # local key=/tmp/tmp.IpcRtNukjS 00:20:20.135 11:56:10 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:20.395 [2024-04-18 11:56:10.713512] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.395 11:56:10 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:20.395 11:56:10 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:20.654 [2024-04-18 11:56:11.042379] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.654 [2024-04-18 11:56:11.042672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.654 11:56:11 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:20.913 malloc0 00:20:20.913 11:56:11 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:20.913 11:56:11 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IpcRtNukjS 00:20:21.173 [2024-04-18 11:56:11.569705] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:21.173 11:56:11 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IpcRtNukjS 00:20:21.173 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.265 Initializing NVMe Controllers 00:20:31.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.265 Initialization complete. Launching workers. 00:20:31.265 ======================================================== 00:20:31.265 Latency(us) 00:20:31.265 Device Information : IOPS MiB/s Average min max 00:20:31.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12798.19 49.99 5001.47 1309.29 8930.87 00:20:31.265 ======================================================== 00:20:31.265 Total : 12798.19 49.99 5001.47 1309.29 8930.87 00:20:31.265 00:20:31.265 11:56:21 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IpcRtNukjS 00:20:31.265 11:56:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.265 11:56:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.265 11:56:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.265 11:56:21 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IpcRtNukjS' 00:20:31.265 11:56:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.265 11:56:21 -- target/tls.sh@28 -- # bdevperf_pid=2514363 00:20:31.265 11:56:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.265 11:56:21 -- target/tls.sh@31 -- # waitforlisten 2514363 /var/tmp/bdevperf.sock 00:20:31.265 11:56:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.265 11:56:21 -- common/autotest_common.sh@817 -- # '[' -z 2514363 ']' 00:20:31.265 11:56:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.265 11:56:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:31.265 11:56:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.265 11:56:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:31.265 11:56:21 -- common/autotest_common.sh@10 -- # set +x 00:20:31.524 [2024-04-18 11:56:21.835104] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:20:31.524 [2024-04-18 11:56:21.835199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514363 ] 00:20:31.524 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.524 [2024-04-18 11:56:21.955788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.783 [2024-04-18 11:56:22.169247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.351 11:56:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:32.351 11:56:22 -- common/autotest_common.sh@850 -- # return 0 00:20:32.351 11:56:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IpcRtNukjS 00:20:32.352 [2024-04-18 11:56:22.758242] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.352 [2024-04-18 11:56:22.758347] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:32.352 TLSTESTn1 00:20:32.352 11:56:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:32.611 Running I/O for 10 seconds... 00:20:42.592 00:20:42.592 Latency(us) 00:20:42.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.592 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:42.592 Verification LBA range: start 0x0 length 0x2000 00:20:42.592 TLSTESTn1 : 10.03 3879.08 15.15 0.00 0.00 32927.11 6160.38 55784.24 00:20:42.592 =================================================================================================================== 00:20:42.592 Total : 3879.08 15.15 0.00 0.00 32927.11 6160.38 55784.24 00:20:42.592 0 00:20:42.592 11:56:33 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.592 11:56:33 -- target/tls.sh@45 -- # killprocess 2514363 00:20:42.592 11:56:33 -- common/autotest_common.sh@936 -- # '[' -z 2514363 ']' 00:20:42.592 11:56:33 -- common/autotest_common.sh@940 -- # kill -0 2514363 00:20:42.592 11:56:33 -- common/autotest_common.sh@941 -- # uname 00:20:42.592 11:56:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.592 11:56:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2514363 00:20:42.592 11:56:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:42.592 11:56:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:42.592 11:56:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2514363' 00:20:42.592 killing process with pid 2514363 00:20:42.592 11:56:33 -- common/autotest_common.sh@955 -- # kill 2514363 00:20:42.592 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.592 00:20:42.592 Latency(us) 00:20:42.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.592 =================================================================================================================== 00:20:42.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.592 [2024-04-18 11:56:33.082260] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.592 11:56:33 -- common/autotest_common.sh@960 -- # wait 2514363 00:20:43.971 11:56:34 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vl3KezMFlB 00:20:43.971 11:56:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:43.971 11:56:34 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vl3KezMFlB 00:20:43.971 11:56:34 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:43.971 11:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.971 11:56:34 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:43.971 11:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.971 11:56:34 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vl3KezMFlB 00:20:43.971 11:56:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.971 11:56:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.971 11:56:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.971 11:56:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Vl3KezMFlB' 00:20:43.971 11:56:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.971 11:56:34 -- target/tls.sh@28 -- # bdevperf_pid=2516488 00:20:43.971 11:56:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.971 11:56:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.971 11:56:34 -- target/tls.sh@31 -- # waitforlisten 2516488 /var/tmp/bdevperf.sock 00:20:43.971 11:56:34 -- common/autotest_common.sh@817 -- # '[' -z 2516488 ']' 00:20:43.971 11:56:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.971 11:56:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.971 11:56:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.971 11:56:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.971 11:56:34 -- common/autotest_common.sh@10 -- # set +x 00:20:43.971 [2024-04-18 11:56:34.194016] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:20:43.971 [2024-04-18 11:56:34.194109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516488 ] 00:20:43.971 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.971 [2024-04-18 11:56:34.313993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.971 [2024-04-18 11:56:34.519281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.540 11:56:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.540 11:56:34 -- common/autotest_common.sh@850 -- # return 0 00:20:44.540 11:56:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vl3KezMFlB 00:20:44.799 [2024-04-18 11:56:35.119068] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.799 [2024-04-18 11:56:35.119174] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.799 [2024-04-18 11:56:35.127296] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:44.799 [2024-04-18 11:56:35.128390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:44.800 [2024-04-18 11:56:35.129365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:44.800 [2024-04-18 11:56:35.130365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:44.800 [2024-04-18 11:56:35.130387] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:44.800 [2024-04-18 11:56:35.130403] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:44.800 request: 00:20:44.800 { 00:20:44.800 "name": "TLSTEST", 00:20:44.800 "trtype": "tcp", 00:20:44.800 "traddr": "10.0.0.2", 00:20:44.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.800 "adrfam": "ipv4", 00:20:44.800 "trsvcid": "4420", 00:20:44.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.800 "psk": "/tmp/tmp.Vl3KezMFlB", 00:20:44.800 "method": "bdev_nvme_attach_controller", 00:20:44.800 "req_id": 1 00:20:44.800 } 00:20:44.800 Got JSON-RPC error response 00:20:44.800 response: 00:20:44.800 { 00:20:44.800 "code": -32602, 00:20:44.800 "message": "Invalid parameters" 00:20:44.800 } 00:20:44.800 11:56:35 -- target/tls.sh@36 -- # killprocess 2516488 00:20:44.800 11:56:35 -- common/autotest_common.sh@936 -- # '[' -z 2516488 ']' 00:20:44.800 11:56:35 -- common/autotest_common.sh@940 -- # kill -0 2516488 00:20:44.800 11:56:35 -- common/autotest_common.sh@941 -- # uname 00:20:44.800 11:56:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.800 11:56:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2516488 00:20:44.800 11:56:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:44.800 11:56:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:44.800 11:56:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2516488' 00:20:44.800 killing process with pid 2516488 00:20:44.800 11:56:35 -- common/autotest_common.sh@955 -- # kill 2516488 00:20:44.800 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.800 00:20:44.800 Latency(us) 00:20:44.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.800 =================================================================================================================== 00:20:44.800 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.800 [2024-04-18 11:56:35.205461] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:44.800 11:56:35 -- common/autotest_common.sh@960 -- # wait 2516488 00:20:45.738 11:56:36 -- target/tls.sh@37 -- # return 1 00:20:45.738 11:56:36 -- common/autotest_common.sh@641 -- # es=1 00:20:45.738 11:56:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:45.738 11:56:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:45.738 11:56:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:45.738 11:56:36 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IpcRtNukjS 00:20:45.738 11:56:36 -- common/autotest_common.sh@638 -- # local es=0 00:20:45.738 11:56:36 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IpcRtNukjS 00:20:45.738 11:56:36 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:45.738 11:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:45.738 11:56:36 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:45.738 11:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:45.738 11:56:36 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IpcRtNukjS 00:20:45.738 11:56:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.738 11:56:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.738 11:56:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:20:45.738 11:56:36 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IpcRtNukjS' 00:20:45.738 11:56:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.738 11:56:36 -- target/tls.sh@28 -- # bdevperf_pid=2516772 00:20:45.738 11:56:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.738 11:56:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.738 11:56:36 -- target/tls.sh@31 -- # waitforlisten 2516772 /var/tmp/bdevperf.sock 00:20:45.738 11:56:36 -- common/autotest_common.sh@817 -- # '[' -z 2516772 ']' 00:20:45.738 11:56:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.738 11:56:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:45.738 11:56:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.738 11:56:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:45.738 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:20:45.997 [2024-04-18 11:56:36.287379] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:45.997 [2024-04-18 11:56:36.287484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516772 ] 00:20:45.997 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.997 [2024-04-18 11:56:36.408129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.256 [2024-04-18 11:56:36.619749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.515 11:56:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:46.515 11:56:37 -- common/autotest_common.sh@850 -- # return 0 00:20:46.515 11:56:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.IpcRtNukjS 00:20:46.773 [2024-04-18 11:56:37.205404] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.773 [2024-04-18 11:56:37.205538] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.773 [2024-04-18 11:56:37.219736] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.773 [2024-04-18 11:56:37.219772] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.773 [2024-04-18 11:56:37.219814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.773 [2024-04-18 11:56:37.220742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:46.773 [2024-04-18 11:56:37.221717] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:46.773 [2024-04-18 11:56:37.222716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:46.773 [2024-04-18 11:56:37.222737] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:46.773 [2024-04-18 11:56:37.222754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:46.773 request: 00:20:46.773 { 00:20:46.774 "name": "TLSTEST", 00:20:46.774 "trtype": "tcp", 00:20:46.774 "traddr": "10.0.0.2", 00:20:46.774 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.774 "adrfam": "ipv4", 00:20:46.774 "trsvcid": "4420", 00:20:46.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.774 "psk": "/tmp/tmp.IpcRtNukjS", 00:20:46.774 "method": "bdev_nvme_attach_controller", 00:20:46.774 "req_id": 1 00:20:46.774 } 00:20:46.774 Got JSON-RPC error response 00:20:46.774 response: 00:20:46.774 { 00:20:46.774 "code": -32602, 00:20:46.774 "message": "Invalid parameters" 00:20:46.774 } 00:20:46.774 11:56:37 -- target/tls.sh@36 -- # killprocess 2516772 00:20:46.774 11:56:37 -- common/autotest_common.sh@936 -- # '[' -z 2516772 ']' 00:20:46.774 11:56:37 -- common/autotest_common.sh@940 -- # kill -0 2516772 00:20:46.774 11:56:37 -- common/autotest_common.sh@941 -- # uname 00:20:46.774 11:56:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:46.774 11:56:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2516772 00:20:46.774 11:56:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:46.774 11:56:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:46.774 11:56:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2516772' 00:20:46.774 killing process with pid 2516772 00:20:46.774 11:56:37 -- common/autotest_common.sh@955 -- # kill 2516772 00:20:46.774 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.774 00:20:46.774 Latency(us) 00:20:46.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.774 =================================================================================================================== 00:20:46.774 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.774 [2024-04-18 11:56:37.298903] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:46.774 11:56:37 -- common/autotest_common.sh@960 -- # wait 2516772 00:20:48.153 11:56:38 -- target/tls.sh@37 -- # return 1 00:20:48.153 11:56:38 -- common/autotest_common.sh@641 -- # es=1 00:20:48.153 11:56:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:48.153 11:56:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:48.153 11:56:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:48.153 11:56:38 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IpcRtNukjS 00:20:48.153 11:56:38 -- common/autotest_common.sh@638 -- # local es=0 00:20:48.153 11:56:38 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IpcRtNukjS 00:20:48.153 11:56:38 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:48.153 11:56:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:48.153 11:56:38 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:48.153 11:56:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:48.153 11:56:38 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IpcRtNukjS 00:20:48.153 11:56:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.153 11:56:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:48.153 11:56:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.153 11:56:38 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IpcRtNukjS' 00:20:48.153 11:56:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.153 11:56:38 -- target/tls.sh@28 -- # bdevperf_pid=2517071 00:20:48.153 11:56:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.153 11:56:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.153 11:56:38 -- target/tls.sh@31 -- # waitforlisten 2517071 /var/tmp/bdevperf.sock 00:20:48.153 11:56:38 -- common/autotest_common.sh@817 -- # '[' -z 2517071 ']' 00:20:48.153 11:56:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.153 11:56:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.153 11:56:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.153 11:56:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.153 11:56:38 -- common/autotest_common.sh@10 -- # set +x 00:20:48.153 [2024-04-18 11:56:38.374183] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:20:48.153 [2024-04-18 11:56:38.374279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517071 ] 00:20:48.153 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.153 [2024-04-18 11:56:38.496064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.412 [2024-04-18 11:56:38.710261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.671 11:56:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.671 11:56:39 -- common/autotest_common.sh@850 -- # return 0 00:20:48.671 11:56:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IpcRtNukjS 00:20:48.930 [2024-04-18 11:56:39.304097] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.930 [2024-04-18 11:56:39.304206] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.930 [2024-04-18 11:56:39.315081] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:48.930 [2024-04-18 11:56:39.315114] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:48.930 [2024-04-18 11:56:39.315154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.930 [2024-04-18 11:56:39.315399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:48.930 [2024-04-18 11:56:39.316374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:48.930 [2024-04-18 11:56:39.317374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:48.930 [2024-04-18 11:56:39.317395] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:48.930 [2024-04-18 11:56:39.317414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:48.930 request: 00:20:48.930 { 00:20:48.930 "name": "TLSTEST", 00:20:48.930 "trtype": "tcp", 00:20:48.930 "traddr": "10.0.0.2", 00:20:48.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.930 "adrfam": "ipv4", 00:20:48.930 "trsvcid": "4420", 00:20:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.930 "psk": "/tmp/tmp.IpcRtNukjS", 00:20:48.930 "method": "bdev_nvme_attach_controller", 00:20:48.930 "req_id": 1 00:20:48.930 } 00:20:48.930 Got JSON-RPC error response 00:20:48.930 response: 00:20:48.930 { 00:20:48.930 "code": -32602, 00:20:48.930 "message": "Invalid parameters" 00:20:48.930 } 00:20:48.930 11:56:39 -- target/tls.sh@36 -- # killprocess 2517071 00:20:48.930 11:56:39 -- common/autotest_common.sh@936 -- # '[' -z 2517071 ']' 00:20:48.930 11:56:39 -- common/autotest_common.sh@940 -- # kill -0 2517071 00:20:48.930 11:56:39 -- common/autotest_common.sh@941 -- # uname 00:20:48.930 11:56:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.930 11:56:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2517071 00:20:48.930 11:56:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:48.930 11:56:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:48.930 11:56:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2517071' 00:20:48.930 killing process with pid 2517071 00:20:48.930 11:56:39 -- common/autotest_common.sh@955 -- # kill 2517071 00:20:48.930 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.930 00:20:48.930 Latency(us) 00:20:48.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.930 =================================================================================================================== 00:20:48.930 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.930 [2024-04-18 11:56:39.385440] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.930 11:56:39 -- common/autotest_common.sh@960 -- # wait 2517071 00:20:49.931 11:56:40 -- target/tls.sh@37 -- # return 1 00:20:49.931 11:56:40 -- common/autotest_common.sh@641 -- # es=1 00:20:49.931 11:56:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:49.931 11:56:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:49.931 11:56:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:49.931 11:56:40 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:49.931 11:56:40 -- common/autotest_common.sh@638 -- # local es=0 00:20:49.931 11:56:40 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:49.931 11:56:40 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:49.931 11:56:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:49.931 11:56:40 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:49.931 11:56:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:49.931 11:56:40 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:49.931 11:56:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.931 11:56:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.931 11:56:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.931 11:56:40 -- target/tls.sh@23 -- # psk= 
00:20:49.931 11:56:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.931 11:56:40 -- target/tls.sh@28 -- # bdevperf_pid=2517574 00:20:49.931 11:56:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.931 11:56:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.931 11:56:40 -- target/tls.sh@31 -- # waitforlisten 2517574 /var/tmp/bdevperf.sock 00:20:49.931 11:56:40 -- common/autotest_common.sh@817 -- # '[' -z 2517574 ']' 00:20:49.931 11:56:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.931 11:56:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:49.931 11:56:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.931 11:56:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:49.931 11:56:40 -- common/autotest_common.sh@10 -- # set +x 00:20:49.931 [2024-04-18 11:56:40.472394] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:49.931 [2024-04-18 11:56:40.472498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517574 ] 00:20:50.190 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.190 [2024-04-18 11:56:40.594621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.449 [2024-04-18 11:56:40.813853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.707 11:56:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:50.707 11:56:41 -- common/autotest_common.sh@850 -- # return 0 00:20:50.707 11:56:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:50.967 [2024-04-18 11:56:41.390439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:50.967 [2024-04-18 11:56:41.392536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:50.967 [2024-04-18 11:56:41.393533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:50.967 [2024-04-18 11:56:41.393557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:50.967 [2024-04-18 11:56:41.393576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:50.967 request: 00:20:50.967 { 00:20:50.967 "name": "TLSTEST", 00:20:50.967 "trtype": "tcp", 00:20:50.967 "traddr": "10.0.0.2", 00:20:50.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.967 "adrfam": "ipv4", 00:20:50.967 "trsvcid": "4420", 00:20:50.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.967 "method": "bdev_nvme_attach_controller", 00:20:50.967 "req_id": 1 00:20:50.967 } 00:20:50.967 Got JSON-RPC error response 00:20:50.967 response: 00:20:50.967 { 00:20:50.967 "code": -32602, 00:20:50.967 "message": "Invalid parameters" 00:20:50.967 } 00:20:50.967 11:56:41 -- target/tls.sh@36 -- # killprocess 2517574 00:20:50.967 11:56:41 -- common/autotest_common.sh@936 -- # '[' -z 2517574 ']' 00:20:50.967 11:56:41 -- common/autotest_common.sh@940 -- # kill -0 2517574 00:20:50.967 11:56:41 -- common/autotest_common.sh@941 -- # uname 00:20:50.967 11:56:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.967 11:56:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2517574 00:20:50.967 11:56:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:50.967 11:56:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:50.967 11:56:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2517574' 00:20:50.967 killing process with pid 2517574 00:20:50.967 11:56:41 -- common/autotest_common.sh@955 -- # kill 2517574 00:20:50.967 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.967 00:20:50.967 Latency(us) 00:20:50.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.967 =================================================================================================================== 00:20:50.967 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.967 11:56:41 -- common/autotest_common.sh@960 -- # wait 2517574 00:20:51.903 11:56:42 -- target/tls.sh@37 -- # return 1 00:20:51.903 11:56:42 -- common/autotest_common.sh@641 -- # es=1 00:20:51.903 11:56:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:51.903 11:56:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:51.903 11:56:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:52.162 11:56:42 -- target/tls.sh@158 -- # killprocess 2511900 00:20:52.162 11:56:42 -- common/autotest_common.sh@936 -- # '[' -z 2511900 ']' 00:20:52.162 11:56:42 -- common/autotest_common.sh@940 -- # kill -0 2511900 00:20:52.162 11:56:42 -- common/autotest_common.sh@941 -- # uname 00:20:52.162 11:56:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.162 11:56:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2511900 00:20:52.162 11:56:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:52.162 11:56:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:52.162 11:56:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2511900' 00:20:52.162 killing process with pid 2511900 00:20:52.162 11:56:42 -- common/autotest_common.sh@955 -- # kill 2511900 00:20:52.162 [2024-04-18 11:56:42.509722] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:52.162 11:56:42 -- common/autotest_common.sh@960 -- # wait 2511900 00:20:53.542 11:56:43 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:53.542 11:56:43 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:20:53.542 11:56:43 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:53.542 11:56:43 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:53.542 11:56:43 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:53.542 11:56:43 -- nvmf/common.sh@693 -- # digest=2 00:20:53.542 11:56:43 -- nvmf/common.sh@694 -- # python - 00:20:53.542 11:56:43 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:53.542 11:56:43 -- target/tls.sh@160 -- # mktemp 00:20:53.542 11:56:43 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.BJdxnMKX0m 00:20:53.542 11:56:43 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:53.542 11:56:43 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.BJdxnMKX0m 00:20:53.542 11:56:43 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:53.542 11:56:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:53.542 11:56:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:53.542 11:56:43 -- common/autotest_common.sh@10 -- # set +x 00:20:53.542 11:56:43 -- nvmf/common.sh@470 -- # nvmfpid=2518134 00:20:53.542 11:56:43 -- nvmf/common.sh@471 -- # waitforlisten 2518134 00:20:53.542 11:56:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.542 11:56:43 -- common/autotest_common.sh@817 -- # '[' -z 2518134 ']' 00:20:53.542 11:56:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.542 11:56:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.542 11:56:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.542 11:56:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.542 11:56:43 -- common/autotest_common.sh@10 -- # set +x 00:20:53.542 [2024-04-18 11:56:44.022065] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:53.542 [2024-04-18 11:56:44.022161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.800 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.800 [2024-04-18 11:56:44.152294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.059 [2024-04-18 11:56:44.356101] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.059 [2024-04-18 11:56:44.356147] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.059 [2024-04-18 11:56:44.356159] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.059 [2024-04-18 11:56:44.356171] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.059 [2024-04-18 11:56:44.356181] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
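The key_long string generated just above follows the NVMe TLS PSK interchange layout NVMeTLSkey-1:<hash id>:<base64 data>:. A minimal sketch of how such a string could be produced, assuming (as the python helper invoked above appears to do) that the configured key characters are base64-encoded together with a trailing 4-byte CRC-32; the little-endian byte order and the use of zlib's CRC-32 are assumptions, not taken from the harness source:

    import base64
    import zlib

    def format_interchange_psk_sketch(key: str, hash_id: int = 2) -> str:
        # Use the configured key exactly as the ASCII characters shown in the trace.
        key_bytes = key.encode("ascii")
        # Append a 4-byte CRC-32 of the key; little-endian order is an assumption.
        crc = zlib.crc32(key_bytes).to_bytes(4, "little")
        # Base64-encode key+CRC and wrap it in the NVMeTLSkey-1 framing.
        return "NVMeTLSkey-1:{:02x}:{}:".format(
            hash_id, base64.b64encode(key_bytes + crc).decode())

    # Should yield a string of the same shape as the key_long value above.
    print(format_interchange_psk_sketch(
        "00112233445566778899aabbccddeeff0011223344556677"))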
00:20:54.059 [2024-04-18 11:56:44.356216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.318 11:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:54.318 11:56:44 -- common/autotest_common.sh@850 -- # return 0 00:20:54.318 11:56:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:54.318 11:56:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:54.318 11:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:54.318 11:56:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.318 11:56:44 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.BJdxnMKX0m 00:20:54.318 11:56:44 -- target/tls.sh@49 -- # local key=/tmp/tmp.BJdxnMKX0m 00:20:54.318 11:56:44 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.577 [2024-04-18 11:56:44.980751] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.577 11:56:44 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.836 11:56:45 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.836 [2024-04-18 11:56:45.317659] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.836 [2024-04-18 11:56:45.317914] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.836 11:56:45 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.095 malloc0 00:20:55.095 11:56:45 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.355 11:56:45 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:20:55.355 [2024-04-18 11:56:45.849939] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:55.355 11:56:45 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BJdxnMKX0m 00:20:55.355 11:56:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:55.355 11:56:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:55.355 11:56:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:55.355 11:56:45 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BJdxnMKX0m' 00:20:55.355 11:56:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.355 11:56:45 -- target/tls.sh@28 -- # bdevperf_pid=2518431 00:20:55.355 11:56:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.355 11:56:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.355 11:56:45 -- target/tls.sh@31 -- # waitforlisten 2518431 /var/tmp/bdevperf.sock 00:20:55.355 11:56:45 -- common/autotest_common.sh@817 -- # '[' -z 2518431 ']' 00:20:55.355 11:56:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.355 11:56:45 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.355 11:56:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.355 11:56:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.355 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:20:55.614 [2024-04-18 11:56:45.943901] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:55.614 [2024-04-18 11:56:45.943993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518431 ] 00:20:55.614 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.614 [2024-04-18 11:56:46.061195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.873 [2024-04-18 11:56:46.275058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.442 11:56:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.442 11:56:46 -- common/autotest_common.sh@850 -- # return 0 00:20:56.442 11:56:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:20:56.442 [2024-04-18 11:56:46.837690] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.442 [2024-04-18 11:56:46.837796] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:56.442 TLSTESTn1 00:20:56.442 11:56:46 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:56.702 Running I/O for 10 seconds... 
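The bdevperf.py invocation above drives the already-running bdevperf instance by issuing a perform_tests request on its -r/-s Unix domain RPC socket. A minimal sketch of the same call written as a plain JSON-RPC 2.0 client; the framing used here (send one JSON object, read until the reply parses) is an assumption about the socket behaviour, not a documented wire format:

    import json
    import socket

    def rpc_call(sock_path: str, method: str, params=None, req_id: int = 1):
        # Build a JSON-RPC 2.0 request and send it over the app's Unix socket.
        req = {"jsonrpc": "2.0", "id": req_id, "method": method}
        if params is not None:
            req["params"] = params
        buf = b""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(req).encode())
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply arrived")
                buf += chunk
                try:
                    return json.loads(buf.decode())  # reply is complete once it parses
                except json.JSONDecodeError:
                    continue

    # Roughly what "bdevperf.py -s /var/tmp/bdevperf.sock perform_tests" does above.
    print(rpc_call("/var/tmp/bdevperf.sock", "perform_tests"))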
00:21:06.681 00:21:06.681 Latency(us) 00:21:06.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.681 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:06.681 Verification LBA range: start 0x0 length 0x2000 00:21:06.681 TLSTESTn1 : 10.03 3859.76 15.08 0.00 0.00 33094.91 8441.04 67947.72 00:21:06.681 =================================================================================================================== 00:21:06.682 Total : 3859.76 15.08 0.00 0.00 33094.91 8441.04 67947.72 00:21:06.682 0 00:21:06.682 11:56:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.682 11:56:57 -- target/tls.sh@45 -- # killprocess 2518431 00:21:06.682 11:56:57 -- common/autotest_common.sh@936 -- # '[' -z 2518431 ']' 00:21:06.682 11:56:57 -- common/autotest_common.sh@940 -- # kill -0 2518431 00:21:06.682 11:56:57 -- common/autotest_common.sh@941 -- # uname 00:21:06.682 11:56:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:06.682 11:56:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2518431 00:21:06.682 11:56:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:06.682 11:56:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:06.682 11:56:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2518431' 00:21:06.682 killing process with pid 2518431 00:21:06.682 11:56:57 -- common/autotest_common.sh@955 -- # kill 2518431 00:21:06.682 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.682 00:21:06.682 Latency(us) 00:21:06.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.682 =================================================================================================================== 00:21:06.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.682 [2024-04-18 11:56:57.164535] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:06.682 11:56:57 -- common/autotest_common.sh@960 -- # wait 2518431 00:21:08.060 11:56:58 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.BJdxnMKX0m 00:21:08.060 11:56:58 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BJdxnMKX0m 00:21:08.060 11:56:58 -- common/autotest_common.sh@638 -- # local es=0 00:21:08.060 11:56:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BJdxnMKX0m 00:21:08.060 11:56:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:08.060 11:56:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.060 11:56:58 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:08.060 11:56:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:08.060 11:56:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BJdxnMKX0m 00:21:08.060 11:56:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:08.060 11:56:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:08.060 11:56:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:08.060 11:56:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BJdxnMKX0m' 00:21:08.060 11:56:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.060 11:56:58 -- target/tls.sh@28 -- # 
bdevperf_pid=2520555 00:21:08.060 11:56:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.060 11:56:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.060 11:56:58 -- target/tls.sh@31 -- # waitforlisten 2520555 /var/tmp/bdevperf.sock 00:21:08.060 11:56:58 -- common/autotest_common.sh@817 -- # '[' -z 2520555 ']' 00:21:08.060 11:56:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.060 11:56:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.060 11:56:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.060 11:56:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.060 11:56:58 -- common/autotest_common.sh@10 -- # set +x 00:21:08.060 [2024-04-18 11:56:58.284189] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:08.060 [2024-04-18 11:56:58.284305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520555 ] 00:21:08.060 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.060 [2024-04-18 11:56:58.403640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.319 [2024-04-18 11:56:58.617353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.577 11:56:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:08.577 11:56:59 -- common/autotest_common.sh@850 -- # return 0 00:21:08.577 11:56:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:21:08.836 [2024-04-18 11:56:59.201434] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.836 [2024-04-18 11:56:59.201518] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:08.836 [2024-04-18 11:56:59.201533] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.BJdxnMKX0m 00:21:08.836 request: 00:21:08.836 { 00:21:08.836 "name": "TLSTEST", 00:21:08.836 "trtype": "tcp", 00:21:08.836 "traddr": "10.0.0.2", 00:21:08.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.836 "adrfam": "ipv4", 00:21:08.836 "trsvcid": "4420", 00:21:08.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.836 "psk": "/tmp/tmp.BJdxnMKX0m", 00:21:08.836 "method": "bdev_nvme_attach_controller", 00:21:08.836 "req_id": 1 00:21:08.836 } 00:21:08.836 Got JSON-RPC error response 00:21:08.836 response: 00:21:08.836 { 00:21:08.836 "code": -1, 00:21:08.836 "message": "Operation not permitted" 00:21:08.836 } 00:21:08.836 11:56:59 -- target/tls.sh@36 -- # killprocess 2520555 00:21:08.836 11:56:59 -- common/autotest_common.sh@936 -- # '[' -z 2520555 ']' 00:21:08.836 11:56:59 -- common/autotest_common.sh@940 -- # kill -0 2520555 00:21:08.836 11:56:59 -- common/autotest_common.sh@941 -- # uname 00:21:08.836 11:56:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.836 
11:56:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2520555 00:21:08.836 11:56:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:08.836 11:56:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:08.836 11:56:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2520555' 00:21:08.836 killing process with pid 2520555 00:21:08.836 11:56:59 -- common/autotest_common.sh@955 -- # kill 2520555 00:21:08.836 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.836 00:21:08.836 Latency(us) 00:21:08.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.836 =================================================================================================================== 00:21:08.836 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:08.836 11:56:59 -- common/autotest_common.sh@960 -- # wait 2520555 00:21:09.771 11:57:00 -- target/tls.sh@37 -- # return 1 00:21:09.771 11:57:00 -- common/autotest_common.sh@641 -- # es=1 00:21:09.771 11:57:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:09.771 11:57:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:09.771 11:57:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:09.771 11:57:00 -- target/tls.sh@174 -- # killprocess 2518134 00:21:09.771 11:57:00 -- common/autotest_common.sh@936 -- # '[' -z 2518134 ']' 00:21:09.771 11:57:00 -- common/autotest_common.sh@940 -- # kill -0 2518134 00:21:09.771 11:57:00 -- common/autotest_common.sh@941 -- # uname 00:21:09.771 11:57:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:09.771 11:57:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2518134 00:21:10.030 11:57:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:10.030 11:57:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:10.030 11:57:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2518134' 00:21:10.030 killing process with pid 2518134 00:21:10.030 11:57:00 -- common/autotest_common.sh@955 -- # kill 2518134 00:21:10.030 [2024-04-18 11:57:00.345135] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:10.030 11:57:00 -- common/autotest_common.sh@960 -- # wait 2518134 00:21:11.430 11:57:01 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:11.430 11:57:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:11.430 11:57:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:11.430 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:21:11.430 11:57:01 -- nvmf/common.sh@470 -- # nvmfpid=2521232 00:21:11.430 11:57:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:11.430 11:57:01 -- nvmf/common.sh@471 -- # waitforlisten 2521232 00:21:11.430 11:57:01 -- common/autotest_common.sh@817 -- # '[' -z 2521232 ']' 00:21:11.430 11:57:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.430 11:57:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:11.430 11:57:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
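The failed attach above (key file left at mode 0666) was rejected with "Incorrect permissions for PSK file" / "Operation not permitted", while the earlier 0600 copy of the same key was accepted. A small pre-flight check along those lines; the exact mask the target enforces is an assumption, the sketch only insists that group and other have no access:

    import os
    import stat
    import sys

    def psk_perms_ok(path: str) -> bool:
        # Reject any PSK file that group or other can read, write or execute.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

    if __name__ == "__main__":
        # Path is whatever mktemp produced in the run, e.g. /tmp/tmp.BJdxnMKX0m.
        path = sys.argv[1]
        if psk_perms_ok(path):
            print(path, "permissions look OK for use with --psk")
        else:
            print(path, "is too permissive; chmod 0600 it before passing --psk")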
00:21:11.430 11:57:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:11.430 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:21:11.430 [2024-04-18 11:57:01.809727] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:11.430 [2024-04-18 11:57:01.809818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.430 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.430 [2024-04-18 11:57:01.938152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.689 [2024-04-18 11:57:02.137863] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.689 [2024-04-18 11:57:02.137911] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.689 [2024-04-18 11:57:02.137923] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.689 [2024-04-18 11:57:02.137936] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.689 [2024-04-18 11:57:02.137945] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.689 [2024-04-18 11:57:02.137977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.258 11:57:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:12.258 11:57:02 -- common/autotest_common.sh@850 -- # return 0 00:21:12.258 11:57:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:12.258 11:57:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:12.258 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:21:12.258 11:57:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.258 11:57:02 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.BJdxnMKX0m 00:21:12.258 11:57:02 -- common/autotest_common.sh@638 -- # local es=0 00:21:12.258 11:57:02 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BJdxnMKX0m 00:21:12.258 11:57:02 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:21:12.258 11:57:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:12.258 11:57:02 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:21:12.258 11:57:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:12.258 11:57:02 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.BJdxnMKX0m 00:21:12.258 11:57:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.BJdxnMKX0m 00:21:12.258 11:57:02 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.258 [2024-04-18 11:57:02.779706] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.258 11:57:02 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.516 11:57:02 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:12.775 [2024-04-18 11:57:03.100576] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:12.775 [2024-04-18 11:57:03.100833] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.775 11:57:03 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:13.034 malloc0 00:21:13.034 11:57:03 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:13.034 11:57:03 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:21:13.293 [2024-04-18 11:57:03.644172] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:13.293 [2024-04-18 11:57:03.644209] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:13.293 [2024-04-18 11:57:03.644236] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:13.293 request: 00:21:13.293 { 00:21:13.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.293 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.293 "psk": "/tmp/tmp.BJdxnMKX0m", 00:21:13.293 "method": "nvmf_subsystem_add_host", 00:21:13.293 "req_id": 1 00:21:13.293 } 00:21:13.293 Got JSON-RPC error response 00:21:13.293 response: 00:21:13.293 { 00:21:13.293 "code": -32603, 00:21:13.293 "message": "Internal error" 00:21:13.293 } 00:21:13.293 11:57:03 -- common/autotest_common.sh@641 -- # es=1 00:21:13.293 11:57:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:13.293 11:57:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:13.293 11:57:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:13.293 11:57:03 -- target/tls.sh@180 -- # killprocess 2521232 00:21:13.293 11:57:03 -- common/autotest_common.sh@936 -- # '[' -z 2521232 ']' 00:21:13.293 11:57:03 -- common/autotest_common.sh@940 -- # kill -0 2521232 00:21:13.293 11:57:03 -- common/autotest_common.sh@941 -- # uname 00:21:13.293 11:57:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.293 11:57:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2521232 00:21:13.293 11:57:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:13.293 11:57:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:13.293 11:57:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2521232' 00:21:13.293 killing process with pid 2521232 00:21:13.293 11:57:03 -- common/autotest_common.sh@955 -- # kill 2521232 00:21:13.293 11:57:03 -- common/autotest_common.sh@960 -- # wait 2521232 00:21:14.684 11:57:05 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.BJdxnMKX0m 00:21:14.684 11:57:05 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:14.684 11:57:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:14.684 11:57:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:14.684 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 11:57:05 -- nvmf/common.sh@470 -- # nvmfpid=2522069 00:21:14.684 11:57:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:14.684 11:57:05 -- nvmf/common.sh@471 -- # waitforlisten 2522069 00:21:14.684 11:57:05 -- common/autotest_common.sh@817 -- # '[' -z 2522069 ']' 00:21:14.684 11:57:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.684 11:57:05 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.684 11:57:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.684 11:57:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.684 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 [2024-04-18 11:57:05.145395] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:14.684 [2024-04-18 11:57:05.145491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.684 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.943 [2024-04-18 11:57:05.275079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.943 [2024-04-18 11:57:05.482540] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.943 [2024-04-18 11:57:05.482594] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.943 [2024-04-18 11:57:05.482608] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.943 [2024-04-18 11:57:05.482621] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.943 [2024-04-18 11:57:05.482630] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.943 [2024-04-18 11:57:05.482664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.510 11:57:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.510 11:57:05 -- common/autotest_common.sh@850 -- # return 0 00:21:15.510 11:57:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:15.510 11:57:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:15.510 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:21:15.510 11:57:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.510 11:57:05 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.BJdxnMKX0m 00:21:15.510 11:57:05 -- target/tls.sh@49 -- # local key=/tmp/tmp.BJdxnMKX0m 00:21:15.510 11:57:05 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.769 [2024-04-18 11:57:06.098117] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.769 11:57:06 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:15.769 11:57:06 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.027 [2024-04-18 11:57:06.418966] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.027 [2024-04-18 11:57:06.419228] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.027 11:57:06 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.286 malloc0 00:21:16.286 11:57:06 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.286 11:57:06 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:21:16.545 [2024-04-18 11:57:06.932382] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.545 11:57:06 -- target/tls.sh@188 -- # bdevperf_pid=2522526 00:21:16.545 11:57:06 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.545 11:57:06 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.545 11:57:06 -- target/tls.sh@191 -- # waitforlisten 2522526 /var/tmp/bdevperf.sock 00:21:16.545 11:57:06 -- common/autotest_common.sh@817 -- # '[' -z 2522526 ']' 00:21:16.545 11:57:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.545 11:57:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:16.545 11:57:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.545 11:57:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:16.545 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:21:16.545 [2024-04-18 11:57:07.020199] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:16.545 [2024-04-18 11:57:07.020296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522526 ] 00:21:16.545 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.805 [2024-04-18 11:57:07.142922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.063 [2024-04-18 11:57:07.354909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.322 11:57:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:17.322 11:57:07 -- common/autotest_common.sh@850 -- # return 0 00:21:17.322 11:57:07 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:21:17.581 [2024-04-18 11:57:07.930223] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.581 [2024-04-18 11:57:07.930325] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:17.581 TLSTESTn1 00:21:17.581 11:57:08 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:17.841 11:57:08 -- target/tls.sh@196 -- # tgtconf='{ 00:21:17.841 "subsystems": [ 00:21:17.841 { 00:21:17.841 "subsystem": "keyring", 00:21:17.841 "config": [] 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "subsystem": "iobuf", 00:21:17.841 "config": [ 00:21:17.841 { 00:21:17.841 "method": "iobuf_set_options", 00:21:17.841 "params": { 00:21:17.841 
"small_pool_count": 8192, 00:21:17.841 "large_pool_count": 1024, 00:21:17.841 "small_bufsize": 8192, 00:21:17.841 "large_bufsize": 135168 00:21:17.841 } 00:21:17.841 } 00:21:17.841 ] 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "subsystem": "sock", 00:21:17.841 "config": [ 00:21:17.841 { 00:21:17.841 "method": "sock_impl_set_options", 00:21:17.841 "params": { 00:21:17.841 "impl_name": "posix", 00:21:17.841 "recv_buf_size": 2097152, 00:21:17.841 "send_buf_size": 2097152, 00:21:17.841 "enable_recv_pipe": true, 00:21:17.841 "enable_quickack": false, 00:21:17.841 "enable_placement_id": 0, 00:21:17.841 "enable_zerocopy_send_server": true, 00:21:17.841 "enable_zerocopy_send_client": false, 00:21:17.841 "zerocopy_threshold": 0, 00:21:17.841 "tls_version": 0, 00:21:17.841 "enable_ktls": false 00:21:17.841 } 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "method": "sock_impl_set_options", 00:21:17.841 "params": { 00:21:17.841 "impl_name": "ssl", 00:21:17.841 "recv_buf_size": 4096, 00:21:17.841 "send_buf_size": 4096, 00:21:17.841 "enable_recv_pipe": true, 00:21:17.841 "enable_quickack": false, 00:21:17.841 "enable_placement_id": 0, 00:21:17.841 "enable_zerocopy_send_server": true, 00:21:17.841 "enable_zerocopy_send_client": false, 00:21:17.841 "zerocopy_threshold": 0, 00:21:17.841 "tls_version": 0, 00:21:17.841 "enable_ktls": false 00:21:17.841 } 00:21:17.841 } 00:21:17.841 ] 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "subsystem": "vmd", 00:21:17.841 "config": [] 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "subsystem": "accel", 00:21:17.841 "config": [ 00:21:17.841 { 00:21:17.841 "method": "accel_set_options", 00:21:17.841 "params": { 00:21:17.841 "small_cache_size": 128, 00:21:17.841 "large_cache_size": 16, 00:21:17.841 "task_count": 2048, 00:21:17.841 "sequence_count": 2048, 00:21:17.841 "buf_count": 2048 00:21:17.841 } 00:21:17.841 } 00:21:17.841 ] 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "subsystem": "bdev", 00:21:17.841 "config": [ 00:21:17.841 { 00:21:17.841 "method": "bdev_set_options", 00:21:17.841 "params": { 00:21:17.841 "bdev_io_pool_size": 65535, 00:21:17.841 "bdev_io_cache_size": 256, 00:21:17.841 "bdev_auto_examine": true, 00:21:17.841 "iobuf_small_cache_size": 128, 00:21:17.841 "iobuf_large_cache_size": 16 00:21:17.841 } 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "method": "bdev_raid_set_options", 00:21:17.841 "params": { 00:21:17.841 "process_window_size_kb": 1024 00:21:17.841 } 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "method": "bdev_iscsi_set_options", 00:21:17.841 "params": { 00:21:17.841 "timeout_sec": 30 00:21:17.841 } 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "method": "bdev_nvme_set_options", 00:21:17.841 "params": { 00:21:17.841 "action_on_timeout": "none", 00:21:17.841 "timeout_us": 0, 00:21:17.841 "timeout_admin_us": 0, 00:21:17.841 "keep_alive_timeout_ms": 10000, 00:21:17.841 "arbitration_burst": 0, 00:21:17.841 "low_priority_weight": 0, 00:21:17.841 "medium_priority_weight": 0, 00:21:17.841 "high_priority_weight": 0, 00:21:17.841 "nvme_adminq_poll_period_us": 10000, 00:21:17.841 "nvme_ioq_poll_period_us": 0, 00:21:17.841 "io_queue_requests": 0, 00:21:17.841 "delay_cmd_submit": true, 00:21:17.841 "transport_retry_count": 4, 00:21:17.841 "bdev_retry_count": 3, 00:21:17.841 "transport_ack_timeout": 0, 00:21:17.841 "ctrlr_loss_timeout_sec": 0, 00:21:17.841 "reconnect_delay_sec": 0, 00:21:17.841 "fast_io_fail_timeout_sec": 0, 00:21:17.841 "disable_auto_failback": false, 00:21:17.841 "generate_uuids": false, 00:21:17.841 "transport_tos": 0, 00:21:17.841 "nvme_error_stat": 
false, 00:21:17.841 "rdma_srq_size": 0, 00:21:17.841 "io_path_stat": false, 00:21:17.841 "allow_accel_sequence": false, 00:21:17.841 "rdma_max_cq_size": 0, 00:21:17.841 "rdma_cm_event_timeout_ms": 0, 00:21:17.841 "dhchap_digests": [ 00:21:17.841 "sha256", 00:21:17.841 "sha384", 00:21:17.841 "sha512" 00:21:17.841 ], 00:21:17.841 "dhchap_dhgroups": [ 00:21:17.841 "null", 00:21:17.841 "ffdhe2048", 00:21:17.841 "ffdhe3072", 00:21:17.841 "ffdhe4096", 00:21:17.841 "ffdhe6144", 00:21:17.841 "ffdhe8192" 00:21:17.841 ] 00:21:17.841 } 00:21:17.841 }, 00:21:17.841 { 00:21:17.841 "method": "bdev_nvme_set_hotplug", 00:21:17.842 "params": { 00:21:17.842 "period_us": 100000, 00:21:17.842 "enable": false 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "bdev_malloc_create", 00:21:17.842 "params": { 00:21:17.842 "name": "malloc0", 00:21:17.842 "num_blocks": 8192, 00:21:17.842 "block_size": 4096, 00:21:17.842 "physical_block_size": 4096, 00:21:17.842 "uuid": "32bd876e-ebc7-403a-83c7-6146cf54dde4", 00:21:17.842 "optimal_io_boundary": 0 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "bdev_wait_for_examine" 00:21:17.842 } 00:21:17.842 ] 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "subsystem": "nbd", 00:21:17.842 "config": [] 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "subsystem": "scheduler", 00:21:17.842 "config": [ 00:21:17.842 { 00:21:17.842 "method": "framework_set_scheduler", 00:21:17.842 "params": { 00:21:17.842 "name": "static" 00:21:17.842 } 00:21:17.842 } 00:21:17.842 ] 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "subsystem": "nvmf", 00:21:17.842 "config": [ 00:21:17.842 { 00:21:17.842 "method": "nvmf_set_config", 00:21:17.842 "params": { 00:21:17.842 "discovery_filter": "match_any", 00:21:17.842 "admin_cmd_passthru": { 00:21:17.842 "identify_ctrlr": false 00:21:17.842 } 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_set_max_subsystems", 00:21:17.842 "params": { 00:21:17.842 "max_subsystems": 1024 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_set_crdt", 00:21:17.842 "params": { 00:21:17.842 "crdt1": 0, 00:21:17.842 "crdt2": 0, 00:21:17.842 "crdt3": 0 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_create_transport", 00:21:17.842 "params": { 00:21:17.842 "trtype": "TCP", 00:21:17.842 "max_queue_depth": 128, 00:21:17.842 "max_io_qpairs_per_ctrlr": 127, 00:21:17.842 "in_capsule_data_size": 4096, 00:21:17.842 "max_io_size": 131072, 00:21:17.842 "io_unit_size": 131072, 00:21:17.842 "max_aq_depth": 128, 00:21:17.842 "num_shared_buffers": 511, 00:21:17.842 "buf_cache_size": 4294967295, 00:21:17.842 "dif_insert_or_strip": false, 00:21:17.842 "zcopy": false, 00:21:17.842 "c2h_success": false, 00:21:17.842 "sock_priority": 0, 00:21:17.842 "abort_timeout_sec": 1, 00:21:17.842 "ack_timeout": 0 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_create_subsystem", 00:21:17.842 "params": { 00:21:17.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.842 "allow_any_host": false, 00:21:17.842 "serial_number": "SPDK00000000000001", 00:21:17.842 "model_number": "SPDK bdev Controller", 00:21:17.842 "max_namespaces": 10, 00:21:17.842 "min_cntlid": 1, 00:21:17.842 "max_cntlid": 65519, 00:21:17.842 "ana_reporting": false 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_subsystem_add_host", 00:21:17.842 "params": { 00:21:17.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.842 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.842 "psk": 
"/tmp/tmp.BJdxnMKX0m" 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_subsystem_add_ns", 00:21:17.842 "params": { 00:21:17.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.842 "namespace": { 00:21:17.842 "nsid": 1, 00:21:17.842 "bdev_name": "malloc0", 00:21:17.842 "nguid": "32BD876EEBC7403A83C76146CF54DDE4", 00:21:17.842 "uuid": "32bd876e-ebc7-403a-83c7-6146cf54dde4", 00:21:17.842 "no_auto_visible": false 00:21:17.842 } 00:21:17.842 } 00:21:17.842 }, 00:21:17.842 { 00:21:17.842 "method": "nvmf_subsystem_add_listener", 00:21:17.842 "params": { 00:21:17.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.842 "listen_address": { 00:21:17.842 "trtype": "TCP", 00:21:17.842 "adrfam": "IPv4", 00:21:17.842 "traddr": "10.0.0.2", 00:21:17.842 "trsvcid": "4420" 00:21:17.842 }, 00:21:17.842 "secure_channel": true 00:21:17.842 } 00:21:17.842 } 00:21:17.842 ] 00:21:17.842 } 00:21:17.842 ] 00:21:17.842 }' 00:21:17.842 11:57:08 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:18.102 11:57:08 -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:18.102 "subsystems": [ 00:21:18.102 { 00:21:18.102 "subsystem": "keyring", 00:21:18.102 "config": [] 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "subsystem": "iobuf", 00:21:18.102 "config": [ 00:21:18.102 { 00:21:18.102 "method": "iobuf_set_options", 00:21:18.102 "params": { 00:21:18.102 "small_pool_count": 8192, 00:21:18.102 "large_pool_count": 1024, 00:21:18.102 "small_bufsize": 8192, 00:21:18.102 "large_bufsize": 135168 00:21:18.102 } 00:21:18.102 } 00:21:18.102 ] 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "subsystem": "sock", 00:21:18.102 "config": [ 00:21:18.102 { 00:21:18.102 "method": "sock_impl_set_options", 00:21:18.102 "params": { 00:21:18.102 "impl_name": "posix", 00:21:18.102 "recv_buf_size": 2097152, 00:21:18.102 "send_buf_size": 2097152, 00:21:18.102 "enable_recv_pipe": true, 00:21:18.102 "enable_quickack": false, 00:21:18.102 "enable_placement_id": 0, 00:21:18.102 "enable_zerocopy_send_server": true, 00:21:18.102 "enable_zerocopy_send_client": false, 00:21:18.102 "zerocopy_threshold": 0, 00:21:18.102 "tls_version": 0, 00:21:18.102 "enable_ktls": false 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "sock_impl_set_options", 00:21:18.102 "params": { 00:21:18.102 "impl_name": "ssl", 00:21:18.102 "recv_buf_size": 4096, 00:21:18.102 "send_buf_size": 4096, 00:21:18.102 "enable_recv_pipe": true, 00:21:18.102 "enable_quickack": false, 00:21:18.102 "enable_placement_id": 0, 00:21:18.102 "enable_zerocopy_send_server": true, 00:21:18.102 "enable_zerocopy_send_client": false, 00:21:18.102 "zerocopy_threshold": 0, 00:21:18.102 "tls_version": 0, 00:21:18.102 "enable_ktls": false 00:21:18.102 } 00:21:18.102 } 00:21:18.102 ] 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "subsystem": "vmd", 00:21:18.102 "config": [] 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "subsystem": "accel", 00:21:18.102 "config": [ 00:21:18.102 { 00:21:18.102 "method": "accel_set_options", 00:21:18.102 "params": { 00:21:18.102 "small_cache_size": 128, 00:21:18.102 "large_cache_size": 16, 00:21:18.102 "task_count": 2048, 00:21:18.102 "sequence_count": 2048, 00:21:18.102 "buf_count": 2048 00:21:18.102 } 00:21:18.102 } 00:21:18.102 ] 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "subsystem": "bdev", 00:21:18.102 "config": [ 00:21:18.102 { 00:21:18.102 "method": "bdev_set_options", 00:21:18.102 "params": { 00:21:18.102 "bdev_io_pool_size": 65535, 00:21:18.102 
"bdev_io_cache_size": 256, 00:21:18.102 "bdev_auto_examine": true, 00:21:18.102 "iobuf_small_cache_size": 128, 00:21:18.102 "iobuf_large_cache_size": 16 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "bdev_raid_set_options", 00:21:18.102 "params": { 00:21:18.102 "process_window_size_kb": 1024 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "bdev_iscsi_set_options", 00:21:18.102 "params": { 00:21:18.102 "timeout_sec": 30 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "bdev_nvme_set_options", 00:21:18.102 "params": { 00:21:18.102 "action_on_timeout": "none", 00:21:18.102 "timeout_us": 0, 00:21:18.102 "timeout_admin_us": 0, 00:21:18.102 "keep_alive_timeout_ms": 10000, 00:21:18.102 "arbitration_burst": 0, 00:21:18.102 "low_priority_weight": 0, 00:21:18.102 "medium_priority_weight": 0, 00:21:18.102 "high_priority_weight": 0, 00:21:18.102 "nvme_adminq_poll_period_us": 10000, 00:21:18.102 "nvme_ioq_poll_period_us": 0, 00:21:18.102 "io_queue_requests": 512, 00:21:18.102 "delay_cmd_submit": true, 00:21:18.102 "transport_retry_count": 4, 00:21:18.102 "bdev_retry_count": 3, 00:21:18.102 "transport_ack_timeout": 0, 00:21:18.102 "ctrlr_loss_timeout_sec": 0, 00:21:18.102 "reconnect_delay_sec": 0, 00:21:18.102 "fast_io_fail_timeout_sec": 0, 00:21:18.102 "disable_auto_failback": false, 00:21:18.102 "generate_uuids": false, 00:21:18.102 "transport_tos": 0, 00:21:18.102 "nvme_error_stat": false, 00:21:18.102 "rdma_srq_size": 0, 00:21:18.102 "io_path_stat": false, 00:21:18.102 "allow_accel_sequence": false, 00:21:18.102 "rdma_max_cq_size": 0, 00:21:18.102 "rdma_cm_event_timeout_ms": 0, 00:21:18.102 "dhchap_digests": [ 00:21:18.102 "sha256", 00:21:18.102 "sha384", 00:21:18.102 "sha512" 00:21:18.102 ], 00:21:18.102 "dhchap_dhgroups": [ 00:21:18.102 "null", 00:21:18.102 "ffdhe2048", 00:21:18.102 "ffdhe3072", 00:21:18.102 "ffdhe4096", 00:21:18.102 "ffdhe6144", 00:21:18.102 "ffdhe8192" 00:21:18.102 ] 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "bdev_nvme_attach_controller", 00:21:18.102 "params": { 00:21:18.102 "name": "TLSTEST", 00:21:18.102 "trtype": "TCP", 00:21:18.102 "adrfam": "IPv4", 00:21:18.102 "traddr": "10.0.0.2", 00:21:18.102 "trsvcid": "4420", 00:21:18.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.102 "prchk_reftag": false, 00:21:18.102 "prchk_guard": false, 00:21:18.102 "ctrlr_loss_timeout_sec": 0, 00:21:18.102 "reconnect_delay_sec": 0, 00:21:18.102 "fast_io_fail_timeout_sec": 0, 00:21:18.102 "psk": "/tmp/tmp.BJdxnMKX0m", 00:21:18.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.102 "hdgst": false, 00:21:18.102 "ddgst": false 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "bdev_nvme_set_hotplug", 00:21:18.102 "params": { 00:21:18.102 "period_us": 100000, 00:21:18.102 "enable": false 00:21:18.102 } 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "method": "bdev_wait_for_examine" 00:21:18.102 } 00:21:18.102 ] 00:21:18.102 }, 00:21:18.102 { 00:21:18.102 "subsystem": "nbd", 00:21:18.102 "config": [] 00:21:18.102 } 00:21:18.102 ] 00:21:18.102 }' 00:21:18.102 11:57:08 -- target/tls.sh@199 -- # killprocess 2522526 00:21:18.102 11:57:08 -- common/autotest_common.sh@936 -- # '[' -z 2522526 ']' 00:21:18.102 11:57:08 -- common/autotest_common.sh@940 -- # kill -0 2522526 00:21:18.102 11:57:08 -- common/autotest_common.sh@941 -- # uname 00:21:18.102 11:57:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:18.102 11:57:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 2522526 00:21:18.102 11:57:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:18.102 11:57:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:18.102 11:57:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2522526' 00:21:18.102 killing process with pid 2522526 00:21:18.102 11:57:08 -- common/autotest_common.sh@955 -- # kill 2522526 00:21:18.102 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.102 00:21:18.103 Latency(us) 00:21:18.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.103 =================================================================================================================== 00:21:18.103 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.103 [2024-04-18 11:57:08.605556] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.103 11:57:08 -- common/autotest_common.sh@960 -- # wait 2522526 00:21:19.480 11:57:09 -- target/tls.sh@200 -- # killprocess 2522069 00:21:19.480 11:57:09 -- common/autotest_common.sh@936 -- # '[' -z 2522069 ']' 00:21:19.480 11:57:09 -- common/autotest_common.sh@940 -- # kill -0 2522069 00:21:19.480 11:57:09 -- common/autotest_common.sh@941 -- # uname 00:21:19.480 11:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.480 11:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2522069 00:21:19.480 11:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:19.480 11:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:19.480 11:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2522069' 00:21:19.480 killing process with pid 2522069 00:21:19.480 11:57:09 -- common/autotest_common.sh@955 -- # kill 2522069 00:21:19.480 [2024-04-18 11:57:09.659643] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:19.480 11:57:09 -- common/autotest_common.sh@960 -- # wait 2522069 00:21:20.859 11:57:10 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:20.859 11:57:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:20.860 11:57:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:20.860 11:57:10 -- target/tls.sh@203 -- # echo '{ 00:21:20.860 "subsystems": [ 00:21:20.860 { 00:21:20.860 "subsystem": "keyring", 00:21:20.860 "config": [] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "iobuf", 00:21:20.860 "config": [ 00:21:20.860 { 00:21:20.860 "method": "iobuf_set_options", 00:21:20.860 "params": { 00:21:20.860 "small_pool_count": 8192, 00:21:20.860 "large_pool_count": 1024, 00:21:20.860 "small_bufsize": 8192, 00:21:20.860 "large_bufsize": 135168 00:21:20.860 } 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "sock", 00:21:20.860 "config": [ 00:21:20.860 { 00:21:20.860 "method": "sock_impl_set_options", 00:21:20.860 "params": { 00:21:20.860 "impl_name": "posix", 00:21:20.860 "recv_buf_size": 2097152, 00:21:20.860 "send_buf_size": 2097152, 00:21:20.860 "enable_recv_pipe": true, 00:21:20.860 "enable_quickack": false, 00:21:20.860 "enable_placement_id": 0, 00:21:20.860 "enable_zerocopy_send_server": true, 00:21:20.860 "enable_zerocopy_send_client": false, 00:21:20.860 "zerocopy_threshold": 0, 00:21:20.860 "tls_version": 0, 00:21:20.860 "enable_ktls": false 00:21:20.860 } 
00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "sock_impl_set_options", 00:21:20.860 "params": { 00:21:20.860 "impl_name": "ssl", 00:21:20.860 "recv_buf_size": 4096, 00:21:20.860 "send_buf_size": 4096, 00:21:20.860 "enable_recv_pipe": true, 00:21:20.860 "enable_quickack": false, 00:21:20.860 "enable_placement_id": 0, 00:21:20.860 "enable_zerocopy_send_server": true, 00:21:20.860 "enable_zerocopy_send_client": false, 00:21:20.860 "zerocopy_threshold": 0, 00:21:20.860 "tls_version": 0, 00:21:20.860 "enable_ktls": false 00:21:20.860 } 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "vmd", 00:21:20.860 "config": [] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "accel", 00:21:20.860 "config": [ 00:21:20.860 { 00:21:20.860 "method": "accel_set_options", 00:21:20.860 "params": { 00:21:20.860 "small_cache_size": 128, 00:21:20.860 "large_cache_size": 16, 00:21:20.860 "task_count": 2048, 00:21:20.860 "sequence_count": 2048, 00:21:20.860 "buf_count": 2048 00:21:20.860 } 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "bdev", 00:21:20.860 "config": [ 00:21:20.860 { 00:21:20.860 "method": "bdev_set_options", 00:21:20.860 "params": { 00:21:20.860 "bdev_io_pool_size": 65535, 00:21:20.860 "bdev_io_cache_size": 256, 00:21:20.860 "bdev_auto_examine": true, 00:21:20.860 "iobuf_small_cache_size": 128, 00:21:20.860 "iobuf_large_cache_size": 16 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "bdev_raid_set_options", 00:21:20.860 "params": { 00:21:20.860 "process_window_size_kb": 1024 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "bdev_iscsi_set_options", 00:21:20.860 "params": { 00:21:20.860 "timeout_sec": 30 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "bdev_nvme_set_options", 00:21:20.860 "params": { 00:21:20.860 "action_on_timeout": "none", 00:21:20.860 "timeout_us": 0, 00:21:20.860 "timeout_admin_us": 0, 00:21:20.860 "keep_alive_timeout_ms": 10000, 00:21:20.860 "arbitration_burst": 0, 00:21:20.860 "low_priority_weight": 0, 00:21:20.860 "medium_priority_weight": 0, 00:21:20.860 "high_priority_weight": 0, 00:21:20.860 "nvme_adminq_poll_period_us": 10000, 00:21:20.860 "nvme_ioq_poll_period_us": 0, 00:21:20.860 "io_queue_requests": 0, 00:21:20.860 "delay_cmd_submit": true, 00:21:20.860 "transport_retry_count": 4, 00:21:20.860 "bdev_retry_count": 3, 00:21:20.860 "transport_ack_timeout": 0, 00:21:20.860 "ctrlr_loss_timeout_sec": 0, 00:21:20.860 "reconnect_delay_sec": 0, 00:21:20.860 "fast_io_fail_timeout_sec": 0, 00:21:20.860 "disable_auto_failback": false, 00:21:20.860 "generate_uuids": false, 00:21:20.860 "transport_tos": 0, 00:21:20.860 "nvme_error_stat": false, 00:21:20.860 "rdma_srq_size": 0, 00:21:20.860 "io_path_stat": false, 00:21:20.860 "allow_accel_sequence": false, 00:21:20.860 "rdma_max_cq_size": 0, 00:21:20.860 "rdma_cm_event_timeout_ms": 0, 00:21:20.860 "dhchap_digests": [ 00:21:20.860 "sha256", 00:21:20.860 "sha384", 00:21:20.860 "sha512" 00:21:20.860 ], 00:21:20.860 "dhchap_dhgroups": [ 00:21:20.860 "null", 00:21:20.860 "ffdhe2048", 00:21:20.860 "ffdhe3072", 00:21:20.860 "ffdhe4096", 00:21:20.860 "ffdhe6144", 00:21:20.860 "ffdhe8192" 00:21:20.860 ] 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "bdev_nvme_set_hotplug", 00:21:20.860 "params": { 00:21:20.860 "period_us": 100000, 00:21:20.860 "enable": false 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "bdev_malloc_create", 00:21:20.860 
"params": { 00:21:20.860 "name": "malloc0", 00:21:20.860 "num_blocks": 8192, 00:21:20.860 "block_size": 4096, 00:21:20.860 "physical_block_size": 4096, 00:21:20.860 "uuid": "32bd876e-ebc7-403a-83c7-6146cf54dde4", 00:21:20.860 "optimal_io_boundary": 0 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "bdev_wait_for_examine" 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "nbd", 00:21:20.860 "config": [] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "scheduler", 00:21:20.860 "config": [ 00:21:20.860 { 00:21:20.860 "method": "framework_set_scheduler", 00:21:20.860 "params": { 00:21:20.860 "name": "static" 00:21:20.860 } 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "subsystem": "nvmf", 00:21:20.860 "config": [ 00:21:20.860 { 00:21:20.860 "method": "nvmf_set_config", 00:21:20.860 "params": { 00:21:20.860 "discovery_filter": "match_any", 00:21:20.860 "admin_cmd_passthru": { 00:21:20.860 "identify_ctrlr": false 00:21:20.860 } 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "nvmf_set_max_subsystems", 00:21:20.860 "params": { 00:21:20.860 "max_subsystems": 1024 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "nvmf_set_crdt", 00:21:20.860 "params": { 00:21:20.860 "crdt1": 0, 00:21:20.860 "crdt2": 0, 00:21:20.860 "crdt3": 0 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "nvmf_create_transport", 00:21:20.860 "params": { 00:21:20.860 "trtype": "TCP", 00:21:20.860 "max_queue_depth": 128, 00:21:20.860 "max_io_qpairs_per_ctrlr": 127, 00:21:20.860 "in_capsule_data_size": 4096, 00:21:20.860 "max_io_size": 131072, 00:21:20.860 "io_unit_size": 131072, 00:21:20.860 "max_aq_depth": 128, 00:21:20.860 "num_shared_buffers": 511, 00:21:20.860 "buf_cache_size": 4294967295, 00:21:20.860 "dif_insert_or_strip": false, 00:21:20.860 "zcopy": false, 00:21:20.860 "c2h_success": false, 00:21:20.860 "sock_priority": 0, 00:21:20.860 "abort_timeout_sec": 1, 00:21:20.860 "ack_timeout": 0 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "method": "nvmf_create_subsystem", 00:21:20.860 "params": { 00:21:20.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.860 "allow_any_host": false, 00:21:20.860 "serial_number": "SPDK00000000000001", 00:21:20.860 "model_number": "SPDK bdev Controller", 00:21:20.860 "max_namespaces": 10, 00:21:20.860 "min_cntlid": 1, 00:21:20.860 "max_cntlid": 65519, 00:21:20.860 "ana_reporting": false 00:21:20.860 } 00:21:20.860 }, 00:21:20.860 { 00:21:20.861 "method": "nvmf_subsystem_add_host", 00:21:20.861 "params": { 00:21:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.861 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.861 "psk": "/tmp/tmp.BJdxnMKX0m" 00:21:20.861 } 00:21:20.861 }, 00:21:20.861 { 00:21:20.861 "method": "nvmf_subsystem_add_ns", 00:21:20.861 "params": { 00:21:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.861 "namespace": { 00:21:20.861 "nsid": 1, 00:21:20.861 "bdev_name": "malloc0", 00:21:20.861 "nguid": "32BD876EEBC7403A83C76146CF54DDE4", 00:21:20.861 "uuid": "32bd876e-ebc7-403a-83c7-6146cf54dde4", 00:21:20.861 "no_auto_visible": false 00:21:20.861 } 00:21:20.861 } 00:21:20.861 }, 00:21:20.861 { 00:21:20.861 "method": "nvmf_subsystem_add_listener", 00:21:20.861 "params": { 00:21:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.861 "listen_address": { 00:21:20.861 "trtype": "TCP", 00:21:20.861 "adrfam": "IPv4", 00:21:20.861 "traddr": "10.0.0.2", 00:21:20.861 "trsvcid": "4420" 00:21:20.861 }, 00:21:20.861 
"secure_channel": true 00:21:20.861 } 00:21:20.861 } 00:21:20.861 ] 00:21:20.861 } 00:21:20.861 ] 00:21:20.861 }' 00:21:20.861 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:21:20.861 11:57:11 -- nvmf/common.sh@470 -- # nvmfpid=2523337 00:21:20.861 11:57:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:20.861 11:57:11 -- nvmf/common.sh@471 -- # waitforlisten 2523337 00:21:20.861 11:57:11 -- common/autotest_common.sh@817 -- # '[' -z 2523337 ']' 00:21:20.861 11:57:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.861 11:57:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:20.861 11:57:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.861 11:57:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:20.861 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:21:20.861 [2024-04-18 11:57:11.093517] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:20.861 [2024-04-18 11:57:11.093610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.861 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.861 [2024-04-18 11:57:11.220699] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.120 [2024-04-18 11:57:11.421322] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.120 [2024-04-18 11:57:11.421366] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.120 [2024-04-18 11:57:11.421378] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.120 [2024-04-18 11:57:11.421391] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.120 [2024-04-18 11:57:11.421400] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.120 [2024-04-18 11:57:11.421505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.687 [2024-04-18 11:57:11.953834] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.687 [2024-04-18 11:57:11.969792] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:21.687 [2024-04-18 11:57:11.985851] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.687 [2024-04-18 11:57:11.986087] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.687 11:57:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:21.687 11:57:12 -- common/autotest_common.sh@850 -- # return 0 00:21:21.687 11:57:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:21.687 11:57:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:21.687 11:57:12 -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 11:57:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.687 11:57:12 -- target/tls.sh@207 -- # bdevperf_pid=2523459 00:21:21.687 11:57:12 -- target/tls.sh@208 -- # waitforlisten 2523459 /var/tmp/bdevperf.sock 00:21:21.687 11:57:12 -- common/autotest_common.sh@817 -- # '[' -z 2523459 ']' 00:21:21.687 11:57:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.687 11:57:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:21.687 11:57:12 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:21.687 11:57:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
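The target for this first run is configured entirely from the JSON blob echoed above and piped to nvmf_tgt on /dev/fd/62; the TLS-relevant pieces are the nvmf_subsystem_add_host entry carrying "psk": "/tmp/tmp.BJdxnMKX0m" and the listener created with "secure_channel": true, which is why the PSK-path deprecation warning and the "TLS support is considered experimental" notice appear right after startup. As a minimal sketch, the same target-side setup can be driven by hand with rpc.py (shortened here from the full scripts/rpc.py path this job uses, and assuming the target's default /var/tmp/spdk.sock); these are the same RPCs the job issues explicitly for a later target in this log:

  # NVMe/TCP target with a TLS (PSK) secured subsystem, PSK given as a file path.
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k corresponds to the "secure_channel": true listener in the config above.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # PSK as a raw file path; this is the form flagged as deprecated (to be removed in v24.09).
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m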
00:21:21.687 11:57:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:21.687 11:57:12 -- target/tls.sh@204 -- # echo '{ 00:21:21.687 "subsystems": [ 00:21:21.687 { 00:21:21.687 "subsystem": "keyring", 00:21:21.687 "config": [] 00:21:21.687 }, 00:21:21.687 { 00:21:21.687 "subsystem": "iobuf", 00:21:21.687 "config": [ 00:21:21.687 { 00:21:21.687 "method": "iobuf_set_options", 00:21:21.687 "params": { 00:21:21.687 "small_pool_count": 8192, 00:21:21.687 "large_pool_count": 1024, 00:21:21.687 "small_bufsize": 8192, 00:21:21.687 "large_bufsize": 135168 00:21:21.687 } 00:21:21.687 } 00:21:21.687 ] 00:21:21.687 }, 00:21:21.687 { 00:21:21.687 "subsystem": "sock", 00:21:21.687 "config": [ 00:21:21.687 { 00:21:21.687 "method": "sock_impl_set_options", 00:21:21.687 "params": { 00:21:21.687 "impl_name": "posix", 00:21:21.687 "recv_buf_size": 2097152, 00:21:21.687 "send_buf_size": 2097152, 00:21:21.687 "enable_recv_pipe": true, 00:21:21.687 "enable_quickack": false, 00:21:21.687 "enable_placement_id": 0, 00:21:21.687 "enable_zerocopy_send_server": true, 00:21:21.687 "enable_zerocopy_send_client": false, 00:21:21.687 "zerocopy_threshold": 0, 00:21:21.687 "tls_version": 0, 00:21:21.687 "enable_ktls": false 00:21:21.687 } 00:21:21.687 }, 00:21:21.687 { 00:21:21.687 "method": "sock_impl_set_options", 00:21:21.687 "params": { 00:21:21.687 "impl_name": "ssl", 00:21:21.687 "recv_buf_size": 4096, 00:21:21.687 "send_buf_size": 4096, 00:21:21.687 "enable_recv_pipe": true, 00:21:21.687 "enable_quickack": false, 00:21:21.687 "enable_placement_id": 0, 00:21:21.687 "enable_zerocopy_send_server": true, 00:21:21.687 "enable_zerocopy_send_client": false, 00:21:21.687 "zerocopy_threshold": 0, 00:21:21.687 "tls_version": 0, 00:21:21.687 "enable_ktls": false 00:21:21.687 } 00:21:21.687 } 00:21:21.687 ] 00:21:21.687 }, 00:21:21.687 { 00:21:21.687 "subsystem": "vmd", 00:21:21.687 "config": [] 00:21:21.687 }, 00:21:21.687 { 00:21:21.687 "subsystem": "accel", 00:21:21.687 "config": [ 00:21:21.687 { 00:21:21.687 "method": "accel_set_options", 00:21:21.687 "params": { 00:21:21.688 "small_cache_size": 128, 00:21:21.688 "large_cache_size": 16, 00:21:21.688 "task_count": 2048, 00:21:21.688 "sequence_count": 2048, 00:21:21.688 "buf_count": 2048 00:21:21.688 } 00:21:21.688 } 00:21:21.688 ] 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "subsystem": "bdev", 00:21:21.688 "config": [ 00:21:21.688 { 00:21:21.688 "method": "bdev_set_options", 00:21:21.688 "params": { 00:21:21.688 "bdev_io_pool_size": 65535, 00:21:21.688 "bdev_io_cache_size": 256, 00:21:21.688 "bdev_auto_examine": true, 00:21:21.688 "iobuf_small_cache_size": 128, 00:21:21.688 "iobuf_large_cache_size": 16 00:21:21.688 } 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "method": "bdev_raid_set_options", 00:21:21.688 "params": { 00:21:21.688 "process_window_size_kb": 1024 00:21:21.688 } 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "method": "bdev_iscsi_set_options", 00:21:21.688 "params": { 00:21:21.688 "timeout_sec": 30 00:21:21.688 } 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "method": "bdev_nvme_set_options", 00:21:21.688 "params": { 00:21:21.688 "action_on_timeout": "none", 00:21:21.688 "timeout_us": 0, 00:21:21.688 "timeout_admin_us": 0, 00:21:21.688 "keep_alive_timeout_ms": 10000, 00:21:21.688 "arbitration_burst": 0, 00:21:21.688 "low_priority_weight": 0, 00:21:21.688 "medium_priority_weight": 0, 00:21:21.688 "high_priority_weight": 0, 00:21:21.688 "nvme_adminq_poll_period_us": 10000, 00:21:21.688 "nvme_ioq_poll_period_us": 0, 00:21:21.688 "io_queue_requests": 512, 
00:21:21.688 "delay_cmd_submit": true, 00:21:21.688 "transport_retry_count": 4, 00:21:21.688 "bdev_retry_count": 3, 00:21:21.688 "transport_ack_timeout": 0, 00:21:21.688 "ctrlr_loss_timeout_sec": 0, 00:21:21.688 "reconnect_delay_sec": 0, 00:21:21.688 "fast_io_fail_timeout_sec": 0, 00:21:21.688 "disable_auto_failback": false, 00:21:21.688 "generate_uuids": false, 00:21:21.688 "transport_tos": 0, 00:21:21.688 "nvme_error_stat": false, 00:21:21.688 "rdma_srq_size": 0, 00:21:21.688 "io_path_stat": false, 00:21:21.688 "allow_accel_sequence": false, 00:21:21.688 "rdma_max_cq_size": 0, 00:21:21.688 "rdma_cm_event_timeout_ms": 0, 00:21:21.688 "dhchap_digests": [ 00:21:21.688 "sha256", 00:21:21.688 "sha384", 00:21:21.688 "sha512" 00:21:21.688 ], 00:21:21.688 "dhchap_dhgroups": [ 00:21:21.688 "null", 00:21:21.688 "ffdhe2048", 00:21:21.688 "ffdhe3072", 00:21:21.688 "ffdhe4096", 00:21:21.688 "ffdhe6144", 00:21:21.688 "ffdhe8192" 00:21:21.688 ] 00:21:21.688 } 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "method": "bdev_nvme_attach_controller", 00:21:21.688 "params": { 00:21:21.688 "name": "TLSTEST", 00:21:21.688 "trtype": "TCP", 00:21:21.688 "adrfam": "IPv4", 00:21:21.688 "traddr": "10.0.0.2", 00:21:21.688 "trsvcid": "4420", 00:21:21.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.688 "prchk_reftag": false, 00:21:21.688 "prchk_guard": false, 00:21:21.688 "ctrlr_loss_timeout_sec": 0, 00:21:21.688 "reconnect_delay_sec": 0, 00:21:21.688 "fast_io_fail_timeout_sec": 0, 00:21:21.688 "psk": "/tmp/tmp.BJdxnMKX0m", 00:21:21.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.688 "hdgst": false, 00:21:21.688 "ddgst": false 00:21:21.688 } 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "method": "bdev_nvme_set_hotplug", 00:21:21.688 "params": { 00:21:21.688 "period_us": 100000, 00:21:21.688 "enable": false 00:21:21.688 } 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "method": "bdev_wait_for_examine" 00:21:21.688 } 00:21:21.688 ] 00:21:21.688 }, 00:21:21.688 { 00:21:21.688 "subsystem": "nbd", 00:21:21.688 "config": [] 00:21:21.688 } 00:21:21.688 ] 00:21:21.688 }' 00:21:21.688 11:57:12 -- common/autotest_common.sh@10 -- # set +x 00:21:21.688 [2024-04-18 11:57:12.144210] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:21.688 [2024-04-18 11:57:12.144303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523459 ] 00:21:21.688 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.948 [2024-04-18 11:57:12.263898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.948 [2024-04-18 11:57:12.477262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.516 [2024-04-18 11:57:12.912628] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.516 [2024-04-18 11:57:12.912755] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:22.516 11:57:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:22.516 11:57:13 -- common/autotest_common.sh@850 -- # return 0 00:21:22.516 11:57:13 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.784 Running I/O for 10 seconds... 
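On the initiator side, bdevperf is started idle with -z and told about its RPC socket with -r /var/tmp/bdevperf.sock, while -q 128, -o 4096, -w verify and -t 10 give the queue depth, I/O size in bytes, workload type and duration that the result table below repeats. Its controller configuration, including the TLS PSK, arrives on /dev/fd/63, and the run itself is then triggered over the RPC socket; reduced to the single trigger command (the -t 20 here appears to be the helper's wait timeout rather than the I/O duration):

  # Kick off the workload on an idle (-z) bdevperf instance over its RPC socket.
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests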
00:21:32.808 00:21:32.808 Latency(us) 00:21:32.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.808 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:32.808 Verification LBA range: start 0x0 length 0x2000 00:21:32.808 TLSTESTn1 : 10.03 3828.35 14.95 0.00 0.00 33367.68 5609.88 114085.07 00:21:32.808 =================================================================================================================== 00:21:32.808 Total : 3828.35 14.95 0.00 0.00 33367.68 5609.88 114085.07 00:21:32.808 0 00:21:32.808 11:57:23 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.808 11:57:23 -- target/tls.sh@214 -- # killprocess 2523459 00:21:32.808 11:57:23 -- common/autotest_common.sh@936 -- # '[' -z 2523459 ']' 00:21:32.808 11:57:23 -- common/autotest_common.sh@940 -- # kill -0 2523459 00:21:32.808 11:57:23 -- common/autotest_common.sh@941 -- # uname 00:21:32.808 11:57:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:32.808 11:57:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2523459 00:21:32.808 11:57:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:32.808 11:57:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:32.808 11:57:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2523459' 00:21:32.808 killing process with pid 2523459 00:21:32.808 11:57:23 -- common/autotest_common.sh@955 -- # kill 2523459 00:21:32.808 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.808 00:21:32.808 Latency(us) 00:21:32.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.808 =================================================================================================================== 00:21:32.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.808 [2024-04-18 11:57:23.256918] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:32.808 11:57:23 -- common/autotest_common.sh@960 -- # wait 2523459 00:21:33.745 11:57:24 -- target/tls.sh@215 -- # killprocess 2523337 00:21:33.745 11:57:24 -- common/autotest_common.sh@936 -- # '[' -z 2523337 ']' 00:21:33.745 11:57:24 -- common/autotest_common.sh@940 -- # kill -0 2523337 00:21:33.745 11:57:24 -- common/autotest_common.sh@941 -- # uname 00:21:34.004 11:57:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:34.004 11:57:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2523337 00:21:34.004 11:57:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:34.005 11:57:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:34.005 11:57:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2523337' 00:21:34.005 killing process with pid 2523337 00:21:34.005 11:57:24 -- common/autotest_common.sh@955 -- # kill 2523337 00:21:34.005 [2024-04-18 11:57:24.340931] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:34.005 11:57:24 -- common/autotest_common.sh@960 -- # wait 2523337 00:21:35.383 11:57:25 -- target/tls.sh@218 -- # nvmfappstart 00:21:35.383 11:57:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:35.383 11:57:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:35.383 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:21:35.383 
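A quick sanity check on the TLSTESTn1 result table above: the MiB/s column is just IOPS times the 4096-byte I/O size, i.e. 3828.35 * 4096 = 15,680,921.6 B/s, and 15,680,921.6 / 1,048,576 = 14.95 MiB/s, matching the printed value. The same relation holds for the shorter runs later in this log (for example 3221.17 * 4096 / 1,048,576 = 12.58 MiB/s).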
11:57:25 -- nvmf/common.sh@470 -- # nvmfpid=2525763 00:21:35.383 11:57:25 -- nvmf/common.sh@471 -- # waitforlisten 2525763 00:21:35.383 11:57:25 -- common/autotest_common.sh@817 -- # '[' -z 2525763 ']' 00:21:35.383 11:57:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.383 11:57:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.384 11:57:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.384 11:57:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.384 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:21:35.384 11:57:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.384 [2024-04-18 11:57:25.763406] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:35.384 [2024-04-18 11:57:25.763501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.384 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.384 [2024-04-18 11:57:25.891561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.643 [2024-04-18 11:57:26.091127] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.643 [2024-04-18 11:57:26.091173] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.643 [2024-04-18 11:57:26.091185] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.643 [2024-04-18 11:57:26.091198] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.643 [2024-04-18 11:57:26.091207] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
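This target instance is launched with -e 0xFFFF, so every tracepoint group is enabled (hence the "Tracepoint Group Mask 0xFFFF specified" notice). Following what the startup notices themselves suggest, the trace data can be inspected while the target runs or preserved for later; a small sketch, with the copy destination chosen arbitrarily for illustration:

  # Snapshot the nvmf target's tracepoints (instance id 0), as the notice above suggests.
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis (destination path is arbitrary).
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0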
00:21:35.643 [2024-04-18 11:57:26.091241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.212 11:57:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.212 11:57:26 -- common/autotest_common.sh@850 -- # return 0 00:21:36.212 11:57:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:36.212 11:57:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.212 11:57:26 -- common/autotest_common.sh@10 -- # set +x 00:21:36.212 11:57:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.212 11:57:26 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.BJdxnMKX0m 00:21:36.212 11:57:26 -- target/tls.sh@49 -- # local key=/tmp/tmp.BJdxnMKX0m 00:21:36.212 11:57:26 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.212 [2024-04-18 11:57:26.725812] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.212 11:57:26 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.470 11:57:26 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.729 [2024-04-18 11:57:27.066728] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.729 [2024-04-18 11:57:27.066993] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.729 11:57:27 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.988 malloc0 00:21:36.988 11:57:27 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:36.988 11:57:27 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BJdxnMKX0m 00:21:37.247 [2024-04-18 11:57:27.597603] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:37.247 11:57:27 -- target/tls.sh@222 -- # bdevperf_pid=2526059 00:21:37.247 11:57:27 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:37.247 11:57:27 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:37.247 11:57:27 -- target/tls.sh@225 -- # waitforlisten 2526059 /var/tmp/bdevperf.sock 00:21:37.247 11:57:27 -- common/autotest_common.sh@817 -- # '[' -z 2526059 ']' 00:21:37.247 11:57:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.247 11:57:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:37.247 11:57:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:37.247 11:57:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:37.247 11:57:27 -- common/autotest_common.sh@10 -- # set +x 00:21:37.247 [2024-04-18 11:57:27.687522] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:37.247 [2024-04-18 11:57:27.687642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526059 ] 00:21:37.247 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.506 [2024-04-18 11:57:27.811313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.506 [2024-04-18 11:57:28.031117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.075 11:57:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:38.075 11:57:28 -- common/autotest_common.sh@850 -- # return 0 00:21:38.075 11:57:28 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BJdxnMKX0m 00:21:38.334 11:57:28 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:38.334 [2024-04-18 11:57:28.792896] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.593 nvme0n1 00:21:38.593 11:57:28 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.593 Running I/O for 1 seconds... 
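Unlike the first bdevperf run, which embedded the PSK file path directly in bdev_nvme_attach_controller and tripped the spdk_nvme_ctrlr_opts.psk deprecation warning, this run registers the key file as a named keyring entry first and then refers to it by name (note that no psk deprecation warning is printed here). Reduced to the two RPCs, with rpc.py standing in for the full scripts/rpc.py path used above:

  # Register the PSK file under the name key0 on the bdevperf RPC socket.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BJdxnMKX0m
  # Attach the TLS-secured controller, referencing the key by name rather than by path.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1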
00:21:39.530 00:21:39.530 Latency(us) 00:21:39.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.530 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:39.530 Verification LBA range: start 0x0 length 0x2000 00:21:39.530 nvme0n1 : 1.04 3221.17 12.58 0.00 0.00 39122.53 8598.32 104018.74 00:21:39.530 =================================================================================================================== 00:21:39.530 Total : 3221.17 12.58 0.00 0.00 39122.53 8598.32 104018.74 00:21:39.530 0 00:21:39.530 11:57:30 -- target/tls.sh@234 -- # killprocess 2526059 00:21:39.530 11:57:30 -- common/autotest_common.sh@936 -- # '[' -z 2526059 ']' 00:21:39.530 11:57:30 -- common/autotest_common.sh@940 -- # kill -0 2526059 00:21:39.530 11:57:30 -- common/autotest_common.sh@941 -- # uname 00:21:39.530 11:57:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:39.530 11:57:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2526059 00:21:39.789 11:57:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:39.789 11:57:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:39.789 11:57:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2526059' 00:21:39.789 killing process with pid 2526059 00:21:39.789 11:57:30 -- common/autotest_common.sh@955 -- # kill 2526059 00:21:39.789 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.789 00:21:39.789 Latency(us) 00:21:39.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.789 =================================================================================================================== 00:21:39.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.789 11:57:30 -- common/autotest_common.sh@960 -- # wait 2526059 00:21:40.725 11:57:31 -- target/tls.sh@235 -- # killprocess 2525763 00:21:40.725 11:57:31 -- common/autotest_common.sh@936 -- # '[' -z 2525763 ']' 00:21:40.725 11:57:31 -- common/autotest_common.sh@940 -- # kill -0 2525763 00:21:40.725 11:57:31 -- common/autotest_common.sh@941 -- # uname 00:21:40.725 11:57:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.725 11:57:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2525763 00:21:40.725 11:57:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.725 11:57:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.725 11:57:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2525763' 00:21:40.725 killing process with pid 2525763 00:21:40.725 11:57:31 -- common/autotest_common.sh@955 -- # kill 2525763 00:21:40.725 [2024-04-18 11:57:31.198409] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.725 11:57:31 -- common/autotest_common.sh@960 -- # wait 2525763 00:21:42.104 11:57:32 -- target/tls.sh@238 -- # nvmfappstart 00:21:42.104 11:57:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:42.104 11:57:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:42.104 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:21:42.104 11:57:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:42.104 11:57:32 -- nvmf/common.sh@470 -- # nvmfpid=2526892 00:21:42.104 11:57:32 -- nvmf/common.sh@471 -- # waitforlisten 2526892 
00:21:42.104 11:57:32 -- common/autotest_common.sh@817 -- # '[' -z 2526892 ']' 00:21:42.104 11:57:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.104 11:57:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.105 11:57:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.105 11:57:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.105 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:21:42.105 [2024-04-18 11:57:32.616353] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:42.105 [2024-04-18 11:57:32.616466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.364 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.364 [2024-04-18 11:57:32.745275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.623 [2024-04-18 11:57:32.955620] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.623 [2024-04-18 11:57:32.955666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.623 [2024-04-18 11:57:32.955680] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.623 [2024-04-18 11:57:32.955694] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.623 [2024-04-18 11:57:32.955704] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.623 [2024-04-18 11:57:32.955741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.881 11:57:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.881 11:57:33 -- common/autotest_common.sh@850 -- # return 0 00:21:42.881 11:57:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:42.881 11:57:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:42.881 11:57:33 -- common/autotest_common.sh@10 -- # set +x 00:21:42.881 11:57:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.881 11:57:33 -- target/tls.sh@239 -- # rpc_cmd 00:21:42.881 11:57:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.881 11:57:33 -- common/autotest_common.sh@10 -- # set +x 00:21:42.881 [2024-04-18 11:57:33.425051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.140 malloc0 00:21:43.140 [2024-04-18 11:57:33.498526] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.140 [2024-04-18 11:57:33.498796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.140 11:57:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.140 11:57:33 -- target/tls.sh@252 -- # bdevperf_pid=2527168 00:21:43.140 11:57:33 -- target/tls.sh@254 -- # waitforlisten 2527168 /var/tmp/bdevperf.sock 00:21:43.140 11:57:33 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:43.140 11:57:33 -- common/autotest_common.sh@817 -- # '[' -z 2527168 ']' 00:21:43.140 11:57:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.140 11:57:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:43.140 11:57:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.140 11:57:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:43.140 11:57:33 -- common/autotest_common.sh@10 -- # set +x 00:21:43.140 [2024-04-18 11:57:33.606341] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:43.140 [2024-04-18 11:57:33.606427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527168 ] 00:21:43.140 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.399 [2024-04-18 11:57:33.726813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.399 [2024-04-18 11:57:33.936224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.966 11:57:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:43.966 11:57:34 -- common/autotest_common.sh@850 -- # return 0 00:21:43.966 11:57:34 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BJdxnMKX0m 00:21:44.225 11:57:34 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:44.225 [2024-04-18 11:57:34.670801] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.225 nvme0n1 00:21:44.483 11:57:34 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.483 Running I/O for 1 seconds... 00:21:45.471 00:21:45.471 Latency(us) 00:21:45.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.471 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.471 Verification LBA range: start 0x0 length 0x2000 00:21:45.471 nvme0n1 : 1.03 3237.23 12.65 0.00 0.00 38994.47 5583.67 68367.16 00:21:45.471 =================================================================================================================== 00:21:45.471 Total : 3237.23 12.65 0.00 0.00 38994.47 5583.67 68367.16 00:21:45.471 0 00:21:45.471 11:57:35 -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:45.471 11:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.471 11:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.729 11:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.729 11:57:36 -- target/tls.sh@263 -- # tgtcfg='{ 00:21:45.729 "subsystems": [ 00:21:45.729 { 00:21:45.729 "subsystem": "keyring", 00:21:45.729 "config": [ 00:21:45.729 { 00:21:45.729 "method": "keyring_file_add_key", 00:21:45.729 "params": { 00:21:45.729 "name": "key0", 00:21:45.729 "path": "/tmp/tmp.BJdxnMKX0m" 00:21:45.729 } 00:21:45.729 } 00:21:45.729 ] 00:21:45.729 }, 00:21:45.729 { 00:21:45.729 "subsystem": "iobuf", 00:21:45.729 "config": [ 00:21:45.729 { 00:21:45.729 "method": "iobuf_set_options", 00:21:45.729 "params": { 00:21:45.729 "small_pool_count": 8192, 00:21:45.729 "large_pool_count": 1024, 00:21:45.729 "small_bufsize": 8192, 00:21:45.729 "large_bufsize": 135168 00:21:45.729 } 00:21:45.729 } 00:21:45.729 ] 00:21:45.729 }, 00:21:45.729 { 00:21:45.729 "subsystem": "sock", 00:21:45.729 "config": [ 00:21:45.729 { 00:21:45.729 "method": "sock_impl_set_options", 00:21:45.729 "params": { 00:21:45.729 "impl_name": "posix", 00:21:45.729 "recv_buf_size": 2097152, 00:21:45.729 "send_buf_size": 2097152, 00:21:45.729 "enable_recv_pipe": true, 00:21:45.729 "enable_quickack": false, 00:21:45.729 "enable_placement_id": 0, 00:21:45.729 
"enable_zerocopy_send_server": true, 00:21:45.729 "enable_zerocopy_send_client": false, 00:21:45.729 "zerocopy_threshold": 0, 00:21:45.729 "tls_version": 0, 00:21:45.729 "enable_ktls": false 00:21:45.729 } 00:21:45.729 }, 00:21:45.729 { 00:21:45.729 "method": "sock_impl_set_options", 00:21:45.729 "params": { 00:21:45.729 "impl_name": "ssl", 00:21:45.729 "recv_buf_size": 4096, 00:21:45.729 "send_buf_size": 4096, 00:21:45.729 "enable_recv_pipe": true, 00:21:45.729 "enable_quickack": false, 00:21:45.729 "enable_placement_id": 0, 00:21:45.729 "enable_zerocopy_send_server": true, 00:21:45.729 "enable_zerocopy_send_client": false, 00:21:45.729 "zerocopy_threshold": 0, 00:21:45.729 "tls_version": 0, 00:21:45.729 "enable_ktls": false 00:21:45.729 } 00:21:45.729 } 00:21:45.729 ] 00:21:45.729 }, 00:21:45.729 { 00:21:45.729 "subsystem": "vmd", 00:21:45.729 "config": [] 00:21:45.729 }, 00:21:45.729 { 00:21:45.730 "subsystem": "accel", 00:21:45.730 "config": [ 00:21:45.730 { 00:21:45.730 "method": "accel_set_options", 00:21:45.730 "params": { 00:21:45.730 "small_cache_size": 128, 00:21:45.730 "large_cache_size": 16, 00:21:45.730 "task_count": 2048, 00:21:45.730 "sequence_count": 2048, 00:21:45.730 "buf_count": 2048 00:21:45.730 } 00:21:45.730 } 00:21:45.730 ] 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "subsystem": "bdev", 00:21:45.730 "config": [ 00:21:45.730 { 00:21:45.730 "method": "bdev_set_options", 00:21:45.730 "params": { 00:21:45.730 "bdev_io_pool_size": 65535, 00:21:45.730 "bdev_io_cache_size": 256, 00:21:45.730 "bdev_auto_examine": true, 00:21:45.730 "iobuf_small_cache_size": 128, 00:21:45.730 "iobuf_large_cache_size": 16 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "bdev_raid_set_options", 00:21:45.730 "params": { 00:21:45.730 "process_window_size_kb": 1024 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "bdev_iscsi_set_options", 00:21:45.730 "params": { 00:21:45.730 "timeout_sec": 30 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "bdev_nvme_set_options", 00:21:45.730 "params": { 00:21:45.730 "action_on_timeout": "none", 00:21:45.730 "timeout_us": 0, 00:21:45.730 "timeout_admin_us": 0, 00:21:45.730 "keep_alive_timeout_ms": 10000, 00:21:45.730 "arbitration_burst": 0, 00:21:45.730 "low_priority_weight": 0, 00:21:45.730 "medium_priority_weight": 0, 00:21:45.730 "high_priority_weight": 0, 00:21:45.730 "nvme_adminq_poll_period_us": 10000, 00:21:45.730 "nvme_ioq_poll_period_us": 0, 00:21:45.730 "io_queue_requests": 0, 00:21:45.730 "delay_cmd_submit": true, 00:21:45.730 "transport_retry_count": 4, 00:21:45.730 "bdev_retry_count": 3, 00:21:45.730 "transport_ack_timeout": 0, 00:21:45.730 "ctrlr_loss_timeout_sec": 0, 00:21:45.730 "reconnect_delay_sec": 0, 00:21:45.730 "fast_io_fail_timeout_sec": 0, 00:21:45.730 "disable_auto_failback": false, 00:21:45.730 "generate_uuids": false, 00:21:45.730 "transport_tos": 0, 00:21:45.730 "nvme_error_stat": false, 00:21:45.730 "rdma_srq_size": 0, 00:21:45.730 "io_path_stat": false, 00:21:45.730 "allow_accel_sequence": false, 00:21:45.730 "rdma_max_cq_size": 0, 00:21:45.730 "rdma_cm_event_timeout_ms": 0, 00:21:45.730 "dhchap_digests": [ 00:21:45.730 "sha256", 00:21:45.730 "sha384", 00:21:45.730 "sha512" 00:21:45.730 ], 00:21:45.730 "dhchap_dhgroups": [ 00:21:45.730 "null", 00:21:45.730 "ffdhe2048", 00:21:45.730 "ffdhe3072", 00:21:45.730 "ffdhe4096", 00:21:45.730 "ffdhe6144", 00:21:45.730 "ffdhe8192" 00:21:45.730 ] 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": 
"bdev_nvme_set_hotplug", 00:21:45.730 "params": { 00:21:45.730 "period_us": 100000, 00:21:45.730 "enable": false 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "bdev_malloc_create", 00:21:45.730 "params": { 00:21:45.730 "name": "malloc0", 00:21:45.730 "num_blocks": 8192, 00:21:45.730 "block_size": 4096, 00:21:45.730 "physical_block_size": 4096, 00:21:45.730 "uuid": "06ce996c-6183-4211-9784-c01aa5a2536d", 00:21:45.730 "optimal_io_boundary": 0 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "bdev_wait_for_examine" 00:21:45.730 } 00:21:45.730 ] 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "subsystem": "nbd", 00:21:45.730 "config": [] 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "subsystem": "scheduler", 00:21:45.730 "config": [ 00:21:45.730 { 00:21:45.730 "method": "framework_set_scheduler", 00:21:45.730 "params": { 00:21:45.730 "name": "static" 00:21:45.730 } 00:21:45.730 } 00:21:45.730 ] 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "subsystem": "nvmf", 00:21:45.730 "config": [ 00:21:45.730 { 00:21:45.730 "method": "nvmf_set_config", 00:21:45.730 "params": { 00:21:45.730 "discovery_filter": "match_any", 00:21:45.730 "admin_cmd_passthru": { 00:21:45.730 "identify_ctrlr": false 00:21:45.730 } 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_set_max_subsystems", 00:21:45.730 "params": { 00:21:45.730 "max_subsystems": 1024 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_set_crdt", 00:21:45.730 "params": { 00:21:45.730 "crdt1": 0, 00:21:45.730 "crdt2": 0, 00:21:45.730 "crdt3": 0 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_create_transport", 00:21:45.730 "params": { 00:21:45.730 "trtype": "TCP", 00:21:45.730 "max_queue_depth": 128, 00:21:45.730 "max_io_qpairs_per_ctrlr": 127, 00:21:45.730 "in_capsule_data_size": 4096, 00:21:45.730 "max_io_size": 131072, 00:21:45.730 "io_unit_size": 131072, 00:21:45.730 "max_aq_depth": 128, 00:21:45.730 "num_shared_buffers": 511, 00:21:45.730 "buf_cache_size": 4294967295, 00:21:45.730 "dif_insert_or_strip": false, 00:21:45.730 "zcopy": false, 00:21:45.730 "c2h_success": false, 00:21:45.730 "sock_priority": 0, 00:21:45.730 "abort_timeout_sec": 1, 00:21:45.730 "ack_timeout": 0 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_create_subsystem", 00:21:45.730 "params": { 00:21:45.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.730 "allow_any_host": false, 00:21:45.730 "serial_number": "00000000000000000000", 00:21:45.730 "model_number": "SPDK bdev Controller", 00:21:45.730 "max_namespaces": 32, 00:21:45.730 "min_cntlid": 1, 00:21:45.730 "max_cntlid": 65519, 00:21:45.730 "ana_reporting": false 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_subsystem_add_host", 00:21:45.730 "params": { 00:21:45.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.730 "host": "nqn.2016-06.io.spdk:host1", 00:21:45.730 "psk": "key0" 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_subsystem_add_ns", 00:21:45.730 "params": { 00:21:45.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.730 "namespace": { 00:21:45.730 "nsid": 1, 00:21:45.730 "bdev_name": "malloc0", 00:21:45.730 "nguid": "06CE996C618342119784C01AA5A2536D", 00:21:45.730 "uuid": "06ce996c-6183-4211-9784-c01aa5a2536d", 00:21:45.730 "no_auto_visible": false 00:21:45.730 } 00:21:45.730 } 00:21:45.730 }, 00:21:45.730 { 00:21:45.730 "method": "nvmf_subsystem_add_listener", 00:21:45.730 "params": { 00:21:45.730 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:45.730 "listen_address": { 00:21:45.730 "trtype": "TCP", 00:21:45.730 "adrfam": "IPv4", 00:21:45.730 "traddr": "10.0.0.2", 00:21:45.730 "trsvcid": "4420" 00:21:45.730 }, 00:21:45.730 "secure_channel": true 00:21:45.730 } 00:21:45.730 } 00:21:45.730 ] 00:21:45.730 } 00:21:45.730 ] 00:21:45.730 }' 00:21:45.730 11:57:36 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:45.989 11:57:36 -- target/tls.sh@264 -- # bperfcfg='{ 00:21:45.989 "subsystems": [ 00:21:45.989 { 00:21:45.989 "subsystem": "keyring", 00:21:45.989 "config": [ 00:21:45.989 { 00:21:45.989 "method": "keyring_file_add_key", 00:21:45.989 "params": { 00:21:45.989 "name": "key0", 00:21:45.989 "path": "/tmp/tmp.BJdxnMKX0m" 00:21:45.989 } 00:21:45.989 } 00:21:45.989 ] 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "subsystem": "iobuf", 00:21:45.989 "config": [ 00:21:45.989 { 00:21:45.989 "method": "iobuf_set_options", 00:21:45.989 "params": { 00:21:45.989 "small_pool_count": 8192, 00:21:45.989 "large_pool_count": 1024, 00:21:45.989 "small_bufsize": 8192, 00:21:45.989 "large_bufsize": 135168 00:21:45.989 } 00:21:45.989 } 00:21:45.989 ] 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "subsystem": "sock", 00:21:45.989 "config": [ 00:21:45.989 { 00:21:45.989 "method": "sock_impl_set_options", 00:21:45.989 "params": { 00:21:45.989 "impl_name": "posix", 00:21:45.989 "recv_buf_size": 2097152, 00:21:45.989 "send_buf_size": 2097152, 00:21:45.989 "enable_recv_pipe": true, 00:21:45.989 "enable_quickack": false, 00:21:45.989 "enable_placement_id": 0, 00:21:45.989 "enable_zerocopy_send_server": true, 00:21:45.989 "enable_zerocopy_send_client": false, 00:21:45.989 "zerocopy_threshold": 0, 00:21:45.989 "tls_version": 0, 00:21:45.989 "enable_ktls": false 00:21:45.989 } 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "method": "sock_impl_set_options", 00:21:45.989 "params": { 00:21:45.989 "impl_name": "ssl", 00:21:45.989 "recv_buf_size": 4096, 00:21:45.989 "send_buf_size": 4096, 00:21:45.989 "enable_recv_pipe": true, 00:21:45.989 "enable_quickack": false, 00:21:45.989 "enable_placement_id": 0, 00:21:45.989 "enable_zerocopy_send_server": true, 00:21:45.989 "enable_zerocopy_send_client": false, 00:21:45.989 "zerocopy_threshold": 0, 00:21:45.989 "tls_version": 0, 00:21:45.989 "enable_ktls": false 00:21:45.989 } 00:21:45.989 } 00:21:45.989 ] 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "subsystem": "vmd", 00:21:45.989 "config": [] 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "subsystem": "accel", 00:21:45.989 "config": [ 00:21:45.989 { 00:21:45.989 "method": "accel_set_options", 00:21:45.989 "params": { 00:21:45.989 "small_cache_size": 128, 00:21:45.989 "large_cache_size": 16, 00:21:45.989 "task_count": 2048, 00:21:45.989 "sequence_count": 2048, 00:21:45.989 "buf_count": 2048 00:21:45.989 } 00:21:45.989 } 00:21:45.989 ] 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "subsystem": "bdev", 00:21:45.989 "config": [ 00:21:45.989 { 00:21:45.989 "method": "bdev_set_options", 00:21:45.989 "params": { 00:21:45.989 "bdev_io_pool_size": 65535, 00:21:45.989 "bdev_io_cache_size": 256, 00:21:45.989 "bdev_auto_examine": true, 00:21:45.989 "iobuf_small_cache_size": 128, 00:21:45.989 "iobuf_large_cache_size": 16 00:21:45.989 } 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "method": "bdev_raid_set_options", 00:21:45.989 "params": { 00:21:45.989 "process_window_size_kb": 1024 00:21:45.989 } 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "method": "bdev_iscsi_set_options", 
00:21:45.989 "params": { 00:21:45.989 "timeout_sec": 30 00:21:45.989 } 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "method": "bdev_nvme_set_options", 00:21:45.989 "params": { 00:21:45.989 "action_on_timeout": "none", 00:21:45.989 "timeout_us": 0, 00:21:45.989 "timeout_admin_us": 0, 00:21:45.989 "keep_alive_timeout_ms": 10000, 00:21:45.989 "arbitration_burst": 0, 00:21:45.989 "low_priority_weight": 0, 00:21:45.989 "medium_priority_weight": 0, 00:21:45.989 "high_priority_weight": 0, 00:21:45.989 "nvme_adminq_poll_period_us": 10000, 00:21:45.989 "nvme_ioq_poll_period_us": 0, 00:21:45.989 "io_queue_requests": 512, 00:21:45.989 "delay_cmd_submit": true, 00:21:45.989 "transport_retry_count": 4, 00:21:45.989 "bdev_retry_count": 3, 00:21:45.989 "transport_ack_timeout": 0, 00:21:45.989 "ctrlr_loss_timeout_sec": 0, 00:21:45.989 "reconnect_delay_sec": 0, 00:21:45.989 "fast_io_fail_timeout_sec": 0, 00:21:45.989 "disable_auto_failback": false, 00:21:45.989 "generate_uuids": false, 00:21:45.989 "transport_tos": 0, 00:21:45.989 "nvme_error_stat": false, 00:21:45.989 "rdma_srq_size": 0, 00:21:45.989 "io_path_stat": false, 00:21:45.989 "allow_accel_sequence": false, 00:21:45.989 "rdma_max_cq_size": 0, 00:21:45.989 "rdma_cm_event_timeout_ms": 0, 00:21:45.989 "dhchap_digests": [ 00:21:45.989 "sha256", 00:21:45.989 "sha384", 00:21:45.989 "sha512" 00:21:45.989 ], 00:21:45.989 "dhchap_dhgroups": [ 00:21:45.989 "null", 00:21:45.989 "ffdhe2048", 00:21:45.989 "ffdhe3072", 00:21:45.989 "ffdhe4096", 00:21:45.989 "ffdhe6144", 00:21:45.989 "ffdhe8192" 00:21:45.989 ] 00:21:45.989 } 00:21:45.989 }, 00:21:45.989 { 00:21:45.989 "method": "bdev_nvme_attach_controller", 00:21:45.989 "params": { 00:21:45.989 "name": "nvme0", 00:21:45.989 "trtype": "TCP", 00:21:45.989 "adrfam": "IPv4", 00:21:45.989 "traddr": "10.0.0.2", 00:21:45.989 "trsvcid": "4420", 00:21:45.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.989 "prchk_reftag": false, 00:21:45.989 "prchk_guard": false, 00:21:45.989 "ctrlr_loss_timeout_sec": 0, 00:21:45.989 "reconnect_delay_sec": 0, 00:21:45.989 "fast_io_fail_timeout_sec": 0, 00:21:45.989 "psk": "key0", 00:21:45.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.989 "hdgst": false, 00:21:45.990 "ddgst": false 00:21:45.990 } 00:21:45.990 }, 00:21:45.990 { 00:21:45.990 "method": "bdev_nvme_set_hotplug", 00:21:45.990 "params": { 00:21:45.990 "period_us": 100000, 00:21:45.990 "enable": false 00:21:45.990 } 00:21:45.990 }, 00:21:45.990 { 00:21:45.990 "method": "bdev_enable_histogram", 00:21:45.990 "params": { 00:21:45.990 "name": "nvme0n1", 00:21:45.990 "enable": true 00:21:45.990 } 00:21:45.990 }, 00:21:45.990 { 00:21:45.990 "method": "bdev_wait_for_examine" 00:21:45.990 } 00:21:45.990 ] 00:21:45.990 }, 00:21:45.990 { 00:21:45.990 "subsystem": "nbd", 00:21:45.990 "config": [] 00:21:45.990 } 00:21:45.990 ] 00:21:45.990 }' 00:21:45.990 11:57:36 -- target/tls.sh@266 -- # killprocess 2527168 00:21:45.990 11:57:36 -- common/autotest_common.sh@936 -- # '[' -z 2527168 ']' 00:21:45.990 11:57:36 -- common/autotest_common.sh@940 -- # kill -0 2527168 00:21:45.990 11:57:36 -- common/autotest_common.sh@941 -- # uname 00:21:45.990 11:57:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.990 11:57:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2527168 00:21:45.990 11:57:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:45.990 11:57:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:45.990 11:57:36 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 2527168' 00:21:45.990 killing process with pid 2527168 00:21:45.990 11:57:36 -- common/autotest_common.sh@955 -- # kill 2527168 00:21:45.990 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.990 00:21:45.990 Latency(us) 00:21:45.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.990 =================================================================================================================== 00:21:45.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.990 11:57:36 -- common/autotest_common.sh@960 -- # wait 2527168 00:21:46.925 11:57:37 -- target/tls.sh@267 -- # killprocess 2526892 00:21:46.925 11:57:37 -- common/autotest_common.sh@936 -- # '[' -z 2526892 ']' 00:21:46.925 11:57:37 -- common/autotest_common.sh@940 -- # kill -0 2526892 00:21:46.925 11:57:37 -- common/autotest_common.sh@941 -- # uname 00:21:46.925 11:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.925 11:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2526892 00:21:46.925 11:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:46.925 11:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:46.925 11:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2526892' 00:21:46.925 killing process with pid 2526892 00:21:46.925 11:57:37 -- common/autotest_common.sh@955 -- # kill 2526892 00:21:46.925 11:57:37 -- common/autotest_common.sh@960 -- # wait 2526892 00:21:48.309 11:57:38 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:48.309 11:57:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:48.309 11:57:38 -- target/tls.sh@269 -- # echo '{ 00:21:48.309 "subsystems": [ 00:21:48.309 { 00:21:48.309 "subsystem": "keyring", 00:21:48.309 "config": [ 00:21:48.309 { 00:21:48.309 "method": "keyring_file_add_key", 00:21:48.309 "params": { 00:21:48.309 "name": "key0", 00:21:48.309 "path": "/tmp/tmp.BJdxnMKX0m" 00:21:48.309 } 00:21:48.309 } 00:21:48.309 ] 00:21:48.309 }, 00:21:48.309 { 00:21:48.309 "subsystem": "iobuf", 00:21:48.309 "config": [ 00:21:48.309 { 00:21:48.309 "method": "iobuf_set_options", 00:21:48.309 "params": { 00:21:48.309 "small_pool_count": 8192, 00:21:48.309 "large_pool_count": 1024, 00:21:48.309 "small_bufsize": 8192, 00:21:48.309 "large_bufsize": 135168 00:21:48.309 } 00:21:48.309 } 00:21:48.309 ] 00:21:48.309 }, 00:21:48.309 { 00:21:48.309 "subsystem": "sock", 00:21:48.309 "config": [ 00:21:48.309 { 00:21:48.309 "method": "sock_impl_set_options", 00:21:48.309 "params": { 00:21:48.309 "impl_name": "posix", 00:21:48.309 "recv_buf_size": 2097152, 00:21:48.309 "send_buf_size": 2097152, 00:21:48.309 "enable_recv_pipe": true, 00:21:48.309 "enable_quickack": false, 00:21:48.309 "enable_placement_id": 0, 00:21:48.309 "enable_zerocopy_send_server": true, 00:21:48.309 "enable_zerocopy_send_client": false, 00:21:48.309 "zerocopy_threshold": 0, 00:21:48.309 "tls_version": 0, 00:21:48.309 "enable_ktls": false 00:21:48.309 } 00:21:48.309 }, 00:21:48.309 { 00:21:48.309 "method": "sock_impl_set_options", 00:21:48.309 "params": { 00:21:48.309 "impl_name": "ssl", 00:21:48.309 "recv_buf_size": 4096, 00:21:48.309 "send_buf_size": 4096, 00:21:48.309 "enable_recv_pipe": true, 00:21:48.309 "enable_quickack": false, 00:21:48.309 "enable_placement_id": 0, 00:21:48.309 "enable_zerocopy_send_server": true, 00:21:48.309 "enable_zerocopy_send_client": false, 00:21:48.309 "zerocopy_threshold": 0, 00:21:48.309 "tls_version": 0, 
00:21:48.309 "enable_ktls": false 00:21:48.309 } 00:21:48.309 } 00:21:48.309 ] 00:21:48.309 }, 00:21:48.309 { 00:21:48.309 "subsystem": "vmd", 00:21:48.309 "config": [] 00:21:48.309 }, 00:21:48.309 { 00:21:48.309 "subsystem": "accel", 00:21:48.309 "config": [ 00:21:48.309 { 00:21:48.309 "method": "accel_set_options", 00:21:48.309 "params": { 00:21:48.309 "small_cache_size": 128, 00:21:48.309 "large_cache_size": 16, 00:21:48.309 "task_count": 2048, 00:21:48.309 "sequence_count": 2048, 00:21:48.309 "buf_count": 2048 00:21:48.309 } 00:21:48.309 } 00:21:48.309 ] 00:21:48.309 }, 00:21:48.309 { 00:21:48.309 "subsystem": "bdev", 00:21:48.309 "config": [ 00:21:48.310 { 00:21:48.310 "method": "bdev_set_options", 00:21:48.310 "params": { 00:21:48.310 "bdev_io_pool_size": 65535, 00:21:48.310 "bdev_io_cache_size": 256, 00:21:48.310 "bdev_auto_examine": true, 00:21:48.310 "iobuf_small_cache_size": 128, 00:21:48.310 "iobuf_large_cache_size": 16 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "bdev_raid_set_options", 00:21:48.310 "params": { 00:21:48.310 "process_window_size_kb": 1024 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "bdev_iscsi_set_options", 00:21:48.310 "params": { 00:21:48.310 "timeout_sec": 30 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "bdev_nvme_set_options", 00:21:48.310 "params": { 00:21:48.310 "action_on_timeout": "none", 00:21:48.310 "timeout_us": 0, 00:21:48.310 "timeout_admin_us": 0, 00:21:48.310 "keep_alive_timeout_ms": 10000, 00:21:48.310 "arbitration_burst": 0, 00:21:48.310 "low_priority_weight": 0, 00:21:48.310 "medium_priority_weight": 0, 00:21:48.310 "high_priority_weight": 0, 00:21:48.310 "nvme_adminq_poll_period_us": 10000, 00:21:48.310 "nvme_ioq_poll_period_us": 0, 00:21:48.310 "io_queue_requests": 0, 00:21:48.310 "delay_cmd_submit": true, 00:21:48.310 "transport_retry_count": 4, 00:21:48.310 "bdev_retry_count": 3, 00:21:48.310 "transport_ack_timeout": 0, 00:21:48.310 "ctrlr_loss_timeout_sec": 0, 00:21:48.310 "reconnect_delay_sec": 0, 00:21:48.310 "fast_io_fail_timeout_sec": 0, 00:21:48.310 "disable_auto_failback": false, 00:21:48.310 "generate_uuids": false, 00:21:48.310 "transport_tos": 0, 00:21:48.310 "nvme_error_stat": false, 00:21:48.310 "rdma_srq_size": 0, 00:21:48.310 "io_path_stat": false, 00:21:48.310 "allow_accel_sequence": false, 00:21:48.310 "rdma_max_cq_size": 0, 00:21:48.310 "rdma_cm_event_timeout_ms": 0, 00:21:48.310 "dhchap_digests": [ 00:21:48.310 "sha256", 00:21:48.310 "sha384", 00:21:48.310 "sha512" 00:21:48.310 ], 00:21:48.310 "dhchap_dhgroups": [ 00:21:48.310 "null", 00:21:48.310 "ffdhe2048", 00:21:48.310 "ffdhe3072", 00:21:48.310 "ffdhe4096", 00:21:48.310 "ffdhe6144", 00:21:48.310 "ffdhe8192" 00:21:48.310 ] 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "bdev_nvme_set_hotplug", 00:21:48.310 "params": { 00:21:48.310 "period_us": 100000, 00:21:48.310 "enable": false 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "bdev_malloc_create", 00:21:48.310 "params": { 00:21:48.310 "name": "malloc0", 00:21:48.310 "num_blocks": 8192, 00:21:48.310 "block_size": 4096, 00:21:48.310 "physical_block_size": 4096, 00:21:48.310 "uuid": "06ce996c-6183-4211-9784-c01aa5a2536d", 00:21:48.310 "optimal_io_boundary": 0 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "bdev_wait_for_examine" 00:21:48.310 } 00:21:48.310 ] 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "subsystem": "nbd", 00:21:48.310 "config": [] 00:21:48.310 }, 00:21:48.310 { 
00:21:48.310 "subsystem": "scheduler", 00:21:48.310 "config": [ 00:21:48.310 { 00:21:48.310 "method": "framework_set_scheduler", 00:21:48.310 "params": { 00:21:48.310 "name": "static" 00:21:48.310 } 00:21:48.310 } 00:21:48.310 ] 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "subsystem": "nvmf", 00:21:48.310 "config": [ 00:21:48.310 { 00:21:48.310 "method": "nvmf_set_config", 00:21:48.310 "params": { 00:21:48.310 "discovery_filter": "match_any", 00:21:48.310 "admin_cmd_passthru": { 00:21:48.310 "identify_ctrlr": false 00:21:48.310 } 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_set_max_subsystems", 00:21:48.310 "params": { 00:21:48.310 "max_subsystems": 1024 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_set_crdt", 00:21:48.310 "params": { 00:21:48.310 "crdt1": 0, 00:21:48.310 "crdt2": 0, 00:21:48.310 "crdt3": 0 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_create_transport", 00:21:48.310 "params": { 00:21:48.310 "trtype": "TCP", 00:21:48.310 "max_queue_depth": 128, 00:21:48.310 "max_io_qpairs_per_ctrlr": 127, 00:21:48.310 "in_capsule_data_size": 4096, 00:21:48.310 "max_io_size": 131072, 00:21:48.310 "io_unit_size": 131072, 00:21:48.310 "max_aq_depth": 128, 00:21:48.310 "num_shared_buffers": 511, 00:21:48.310 "buf_cache_size": 4294967295, 00:21:48.310 "dif_insert_or_strip": false, 00:21:48.310 "zcopy": false, 00:21:48.310 "c2h_success": false, 00:21:48.310 "sock_priority": 0, 00:21:48.310 "abort_timeout_sec": 1, 00:21:48.310 "ack_timeout": 0 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_create_subsystem", 00:21:48.310 "params": { 00:21:48.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.310 "allow_any_host": false, 00:21:48.310 "serial_number": "00000000000000000000", 00:21:48.310 "model_number": "SPDK bdev Controller", 00:21:48.310 "max_namespaces": 32, 00:21:48.310 "min_cntlid": 1, 00:21:48.310 "max_cntlid": 65519, 00:21:48.310 "ana_reporting": false 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_subsystem_add_host", 00:21:48.310 "params": { 00:21:48.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.310 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.310 "psk": "key0" 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_subsystem_add_ns", 00:21:48.310 "params": { 00:21:48.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.310 "namespace": { 00:21:48.310 "nsid": 1, 00:21:48.310 "bdev_name": "malloc0", 00:21:48.310 "nguid": "06CE996C618342119784C01AA5A2536D", 00:21:48.310 "uuid": "06ce996c-6183-4211-9784-c01aa5a2536d", 00:21:48.310 "no_auto_visible": false 00:21:48.310 } 00:21:48.310 } 00:21:48.310 }, 00:21:48.310 { 00:21:48.310 "method": "nvmf_subsystem_add_listener", 00:21:48.310 "params": { 00:21:48.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.310 "listen_address": { 00:21:48.310 "trtype": "TCP", 00:21:48.310 "adrfam": "IPv4", 00:21:48.310 "traddr": "10.0.0.2", 00:21:48.310 "trsvcid": "4420" 00:21:48.310 }, 00:21:48.310 "secure_channel": true 00:21:48.310 } 00:21:48.310 } 00:21:48.310 ] 00:21:48.310 } 00:21:48.310 ] 00:21:48.310 }' 00:21:48.310 11:57:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:48.310 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:21:48.310 11:57:38 -- nvmf/common.sh@470 -- # nvmfpid=2527992 00:21:48.310 11:57:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:48.310 
11:57:38 -- nvmf/common.sh@471 -- # waitforlisten 2527992 00:21:48.310 11:57:38 -- common/autotest_common.sh@817 -- # '[' -z 2527992 ']' 00:21:48.310 11:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.310 11:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:48.310 11:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.310 11:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:48.310 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:21:48.310 [2024-04-18 11:57:38.821747] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:48.310 [2024-04-18 11:57:38.821837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.569 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.569 [2024-04-18 11:57:38.949597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.828 [2024-04-18 11:57:39.157953] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.828 [2024-04-18 11:57:39.157994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.828 [2024-04-18 11:57:39.158006] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.828 [2024-04-18 11:57:39.158020] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.828 [2024-04-18 11:57:39.158030] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:48.828 [2024-04-18 11:57:39.158130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.394 [2024-04-18 11:57:39.712871] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.394 [2024-04-18 11:57:39.744903] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.394 [2024-04-18 11:57:39.745148] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.394 11:57:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:49.394 11:57:39 -- common/autotest_common.sh@850 -- # return 0 00:21:49.394 11:57:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:49.394 11:57:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:49.394 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.394 11:57:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.394 11:57:39 -- target/tls.sh@272 -- # bdevperf_pid=2528269 00:21:49.394 11:57:39 -- target/tls.sh@273 -- # waitforlisten 2528269 /var/tmp/bdevperf.sock 00:21:49.394 11:57:39 -- common/autotest_common.sh@817 -- # '[' -z 2528269 ']' 00:21:49.394 11:57:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.394 11:57:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.394 11:57:39 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:49.394 11:57:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:49.394 11:57:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.394 11:57:39 -- target/tls.sh@270 -- # echo '{ 00:21:49.394 "subsystems": [ 00:21:49.394 { 00:21:49.394 "subsystem": "keyring", 00:21:49.394 "config": [ 00:21:49.394 { 00:21:49.394 "method": "keyring_file_add_key", 00:21:49.394 "params": { 00:21:49.394 "name": "key0", 00:21:49.394 "path": "/tmp/tmp.BJdxnMKX0m" 00:21:49.394 } 00:21:49.394 } 00:21:49.394 ] 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "subsystem": "iobuf", 00:21:49.394 "config": [ 00:21:49.394 { 00:21:49.394 "method": "iobuf_set_options", 00:21:49.394 "params": { 00:21:49.394 "small_pool_count": 8192, 00:21:49.394 "large_pool_count": 1024, 00:21:49.394 "small_bufsize": 8192, 00:21:49.394 "large_bufsize": 135168 00:21:49.394 } 00:21:49.394 } 00:21:49.394 ] 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "subsystem": "sock", 00:21:49.394 "config": [ 00:21:49.394 { 00:21:49.394 "method": "sock_impl_set_options", 00:21:49.394 "params": { 00:21:49.394 "impl_name": "posix", 00:21:49.394 "recv_buf_size": 2097152, 00:21:49.394 "send_buf_size": 2097152, 00:21:49.394 "enable_recv_pipe": true, 00:21:49.394 "enable_quickack": false, 00:21:49.394 "enable_placement_id": 0, 00:21:49.394 "enable_zerocopy_send_server": true, 00:21:49.394 "enable_zerocopy_send_client": false, 00:21:49.394 "zerocopy_threshold": 0, 00:21:49.394 "tls_version": 0, 00:21:49.394 "enable_ktls": false 00:21:49.394 } 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "method": "sock_impl_set_options", 00:21:49.394 "params": { 00:21:49.394 "impl_name": "ssl", 00:21:49.394 "recv_buf_size": 4096, 00:21:49.394 "send_buf_size": 4096, 00:21:49.394 "enable_recv_pipe": true, 00:21:49.394 "enable_quickack": false, 00:21:49.394 "enable_placement_id": 0, 00:21:49.394 "enable_zerocopy_send_server": true, 00:21:49.394 "enable_zerocopy_send_client": false, 00:21:49.394 "zerocopy_threshold": 0, 00:21:49.394 "tls_version": 0, 00:21:49.394 "enable_ktls": false 00:21:49.394 } 00:21:49.394 } 00:21:49.394 ] 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "subsystem": "vmd", 00:21:49.394 "config": [] 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "subsystem": "accel", 00:21:49.394 "config": [ 00:21:49.394 { 00:21:49.394 "method": "accel_set_options", 00:21:49.394 "params": { 00:21:49.394 "small_cache_size": 128, 00:21:49.394 "large_cache_size": 16, 00:21:49.394 "task_count": 2048, 00:21:49.394 "sequence_count": 2048, 00:21:49.394 "buf_count": 2048 00:21:49.394 } 00:21:49.394 } 00:21:49.394 ] 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "subsystem": "bdev", 00:21:49.394 "config": [ 00:21:49.394 { 00:21:49.394 "method": "bdev_set_options", 00:21:49.394 "params": { 00:21:49.394 "bdev_io_pool_size": 65535, 00:21:49.394 "bdev_io_cache_size": 256, 00:21:49.394 "bdev_auto_examine": true, 00:21:49.394 "iobuf_small_cache_size": 128, 00:21:49.394 "iobuf_large_cache_size": 16 00:21:49.394 } 00:21:49.394 }, 00:21:49.394 { 00:21:49.394 "method": "bdev_raid_set_options", 00:21:49.394 "params": { 00:21:49.394 "process_window_size_kb": 1024 00:21:49.394 } 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "method": "bdev_iscsi_set_options", 00:21:49.395 "params": { 00:21:49.395 "timeout_sec": 30 00:21:49.395 } 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "method": "bdev_nvme_set_options", 00:21:49.395 "params": { 00:21:49.395 "action_on_timeout": "none", 00:21:49.395 "timeout_us": 0, 00:21:49.395 "timeout_admin_us": 0, 00:21:49.395 "keep_alive_timeout_ms": 10000, 00:21:49.395 "arbitration_burst": 0, 00:21:49.395 "low_priority_weight": 0, 00:21:49.395 
"medium_priority_weight": 0, 00:21:49.395 "high_priority_weight": 0, 00:21:49.395 "nvme_adminq_poll_period_us": 10000, 00:21:49.395 "nvme_ioq_poll_period_us": 0, 00:21:49.395 "io_queue_requests": 512, 00:21:49.395 "delay_cmd_submit": true, 00:21:49.395 "transport_retry_count": 4, 00:21:49.395 "bdev_retry_count": 3, 00:21:49.395 "transport_ack_timeout": 0, 00:21:49.395 "ctrlr_loss_timeout_sec": 0, 00:21:49.395 "reconnect_delay_sec": 0, 00:21:49.395 "fast_io_fail_timeout_sec": 0, 00:21:49.395 "disable_auto_failback": false, 00:21:49.395 "generate_uuids": false, 00:21:49.395 "transport_tos": 0, 00:21:49.395 "nvme_error_stat": false, 00:21:49.395 "rdma_srq_size": 0, 00:21:49.395 "io_path_stat": false, 00:21:49.395 "allow_accel_sequence": false, 00:21:49.395 "rdma_max_cq_size": 0, 00:21:49.395 "rdma_cm_event_timeout_ms": 0, 00:21:49.395 "dhchap_digests": [ 00:21:49.395 "sha256", 00:21:49.395 "sha384", 00:21:49.395 "sha512" 00:21:49.395 ], 00:21:49.395 "dhchap_dhgroups": [ 00:21:49.395 "null", 00:21:49.395 "ffdhe2048", 00:21:49.395 "ffdhe3072", 00:21:49.395 "ffdhe4096", 00:21:49.395 "ffdhe6144", 00:21:49.395 "ffdhe8192" 00:21:49.395 ] 00:21:49.395 } 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "method": "bdev_nvme_attach_controller", 00:21:49.395 "params": { 00:21:49.395 "name": "nvme0", 00:21:49.395 "trtype": "TCP", 00:21:49.395 "adrfam": "IPv4", 00:21:49.395 "traddr": "10.0.0.2", 00:21:49.395 "trsvcid": "4420", 00:21:49.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.395 "prchk_reftag": false, 00:21:49.395 "prchk_guard": false, 00:21:49.395 "ctrlr_loss_timeout_sec": 0, 00:21:49.395 "reconnect_delay_sec": 0, 00:21:49.395 "fast_io_fail_timeout_sec": 0, 00:21:49.395 "psk": "key0", 00:21:49.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.395 "hdgst": false, 00:21:49.395 "ddgst": false 00:21:49.395 } 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "method": "bdev_nvme_set_hotplug", 00:21:49.395 "params": { 00:21:49.395 "period_us": 100000, 00:21:49.395 "enable": false 00:21:49.395 } 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "method": "bdev_enable_histogram", 00:21:49.395 "params": { 00:21:49.395 "name": "nvme0n1", 00:21:49.395 "enable": true 00:21:49.395 } 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "method": "bdev_wait_for_examine" 00:21:49.395 } 00:21:49.395 ] 00:21:49.395 }, 00:21:49.395 { 00:21:49.395 "subsystem": "nbd", 00:21:49.395 "config": [] 00:21:49.395 } 00:21:49.395 ] 00:21:49.395 }' 00:21:49.395 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.395 [2024-04-18 11:57:39.896284] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:49.395 [2024-04-18 11:57:39.896376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528269 ] 00:21:49.653 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.653 [2024-04-18 11:57:40.018746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.912 [2024-04-18 11:57:40.243383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.170 [2024-04-18 11:57:40.687396] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.428 11:57:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:50.428 11:57:40 -- common/autotest_common.sh@850 -- # return 0 00:21:50.428 11:57:40 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.428 11:57:40 -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:50.686 11:57:41 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.686 11:57:41 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.686 Running I/O for 1 seconds... 00:21:51.621 00:21:51.621 Latency(us) 00:21:51.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.621 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:51.621 Verification LBA range: start 0x0 length 0x2000 00:21:51.621 nvme0n1 : 1.04 3426.47 13.38 0.00 0.00 36787.96 7916.75 60817.41 00:21:51.621 =================================================================================================================== 00:21:51.621 Total : 3426.47 13.38 0.00 0.00 36787.96 7916.75 60817.41 00:21:51.621 0 00:21:51.621 11:57:42 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:51.621 11:57:42 -- target/tls.sh@279 -- # cleanup 00:21:51.621 11:57:42 -- target/tls.sh@15 -- # process_shm --id 0 00:21:51.621 11:57:42 -- common/autotest_common.sh@794 -- # type=--id 00:21:51.621 11:57:42 -- common/autotest_common.sh@795 -- # id=0 00:21:51.621 11:57:42 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:51.621 11:57:42 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:51.621 11:57:42 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:51.621 11:57:42 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:51.621 11:57:42 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:51.621 11:57:42 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:51.621 nvmf_trace.0 00:21:51.880 11:57:42 -- common/autotest_common.sh@809 -- # return 0 00:21:51.880 11:57:42 -- target/tls.sh@16 -- # killprocess 2528269 00:21:51.880 11:57:42 -- common/autotest_common.sh@936 -- # '[' -z 2528269 ']' 00:21:51.880 11:57:42 -- common/autotest_common.sh@940 -- # kill -0 2528269 00:21:51.880 11:57:42 -- common/autotest_common.sh@941 -- # uname 00:21:51.880 11:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.880 11:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2528269 00:21:51.880 11:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:51.880 11:57:42 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:21:51.880 11:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2528269' 00:21:51.880 killing process with pid 2528269 00:21:51.880 11:57:42 -- common/autotest_common.sh@955 -- # kill 2528269 00:21:51.880 Received shutdown signal, test time was about 1.000000 seconds 00:21:51.880 00:21:51.880 Latency(us) 00:21:51.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.880 =================================================================================================================== 00:21:51.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.880 11:57:42 -- common/autotest_common.sh@960 -- # wait 2528269 00:21:52.816 11:57:43 -- target/tls.sh@17 -- # nvmftestfini 00:21:52.816 11:57:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:52.816 11:57:43 -- nvmf/common.sh@117 -- # sync 00:21:52.816 11:57:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.816 11:57:43 -- nvmf/common.sh@120 -- # set +e 00:21:52.816 11:57:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.816 11:57:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.816 rmmod nvme_tcp 00:21:52.816 rmmod nvme_fabrics 00:21:52.816 rmmod nvme_keyring 00:21:52.816 11:57:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.816 11:57:43 -- nvmf/common.sh@124 -- # set -e 00:21:52.816 11:57:43 -- nvmf/common.sh@125 -- # return 0 00:21:52.816 11:57:43 -- nvmf/common.sh@478 -- # '[' -n 2527992 ']' 00:21:52.816 11:57:43 -- nvmf/common.sh@479 -- # killprocess 2527992 00:21:52.816 11:57:43 -- common/autotest_common.sh@936 -- # '[' -z 2527992 ']' 00:21:52.816 11:57:43 -- common/autotest_common.sh@940 -- # kill -0 2527992 00:21:52.816 11:57:43 -- common/autotest_common.sh@941 -- # uname 00:21:52.816 11:57:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.816 11:57:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2527992 00:21:53.075 11:57:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:53.075 11:57:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:53.075 11:57:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2527992' 00:21:53.075 killing process with pid 2527992 00:21:53.075 11:57:43 -- common/autotest_common.sh@955 -- # kill 2527992 00:21:53.075 11:57:43 -- common/autotest_common.sh@960 -- # wait 2527992 00:21:54.451 11:57:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:54.451 11:57:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:54.451 11:57:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:54.451 11:57:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.451 11:57:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.451 11:57:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.451 11:57:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.451 11:57:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.355 11:57:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.355 11:57:46 -- target/tls.sh@18 -- # rm -f /tmp/tmp.IpcRtNukjS /tmp/tmp.Vl3KezMFlB /tmp/tmp.BJdxnMKX0m 00:21:56.355 00:21:56.355 real 1m46.574s 00:21:56.355 user 2m37.146s 00:21:56.355 sys 0m34.835s 00:21:56.355 11:57:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:56.355 11:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:56.355 ************************************ 00:21:56.355 END TEST nvmf_tls 00:21:56.355 
************************************ 00:21:56.355 11:57:46 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:56.355 11:57:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:56.355 11:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:56.355 11:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:56.614 ************************************ 00:21:56.614 START TEST nvmf_fips 00:21:56.614 ************************************ 00:21:56.614 11:57:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:56.614 * Looking for test storage... 00:21:56.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:56.614 11:57:47 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.614 11:57:47 -- nvmf/common.sh@7 -- # uname -s 00:21:56.614 11:57:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.614 11:57:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.614 11:57:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.614 11:57:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.614 11:57:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.614 11:57:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.614 11:57:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.614 11:57:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.614 11:57:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.614 11:57:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.614 11:57:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:56.614 11:57:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:56.614 11:57:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.614 11:57:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.614 11:57:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.614 11:57:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.614 11:57:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.614 11:57:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.614 11:57:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.614 11:57:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.614 11:57:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.614 11:57:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.614 11:57:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.614 11:57:47 -- paths/export.sh@5 -- # export PATH 00:21:56.614 11:57:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.614 11:57:47 -- nvmf/common.sh@47 -- # : 0 00:21:56.614 11:57:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.614 11:57:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.614 11:57:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.614 11:57:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.614 11:57:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.614 11:57:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.614 11:57:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.614 11:57:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.614 11:57:47 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:56.614 11:57:47 -- fips/fips.sh@89 -- # check_openssl_version 00:21:56.614 11:57:47 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:56.614 11:57:47 -- fips/fips.sh@85 -- # openssl version 00:21:56.614 11:57:47 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:56.614 11:57:47 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:56.614 11:57:47 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:56.614 11:57:47 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:56.614 11:57:47 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:56.614 11:57:47 -- scripts/common.sh@333 -- # IFS=.-: 00:21:56.614 11:57:47 -- scripts/common.sh@333 -- # read -ra ver1 00:21:56.614 11:57:47 -- scripts/common.sh@334 -- # IFS=.-: 00:21:56.614 11:57:47 -- scripts/common.sh@334 -- # read -ra ver2 00:21:56.614 11:57:47 -- scripts/common.sh@335 -- # local 'op=>=' 00:21:56.614 11:57:47 -- scripts/common.sh@337 -- # ver1_l=3 00:21:56.614 11:57:47 -- scripts/common.sh@338 -- # ver2_l=3 00:21:56.614 11:57:47 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:21:56.614 11:57:47 -- scripts/common.sh@341 -- # case "$op" in 00:21:56.614 11:57:47 -- scripts/common.sh@345 -- # : 1 00:21:56.614 11:57:47 -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:56.614 11:57:47 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.614 11:57:47 -- scripts/common.sh@362 -- # decimal 3 00:21:56.614 11:57:47 -- scripts/common.sh@350 -- # local d=3 00:21:56.614 11:57:47 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:56.614 11:57:47 -- scripts/common.sh@352 -- # echo 3 00:21:56.614 11:57:47 -- scripts/common.sh@362 -- # ver1[v]=3 00:21:56.614 11:57:47 -- scripts/common.sh@363 -- # decimal 3 00:21:56.614 11:57:47 -- scripts/common.sh@350 -- # local d=3 00:21:56.614 11:57:47 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:56.614 11:57:47 -- scripts/common.sh@352 -- # echo 3 00:21:56.614 11:57:47 -- scripts/common.sh@363 -- # ver2[v]=3 00:21:56.614 11:57:47 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:56.614 11:57:47 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:56.614 11:57:47 -- scripts/common.sh@361 -- # (( v++ )) 00:21:56.614 11:57:47 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.614 11:57:47 -- scripts/common.sh@362 -- # decimal 0 00:21:56.614 11:57:47 -- scripts/common.sh@350 -- # local d=0 00:21:56.614 11:57:47 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:56.614 11:57:47 -- scripts/common.sh@352 -- # echo 0 00:21:56.614 11:57:47 -- scripts/common.sh@362 -- # ver1[v]=0 00:21:56.614 11:57:47 -- scripts/common.sh@363 -- # decimal 0 00:21:56.614 11:57:47 -- scripts/common.sh@350 -- # local d=0 00:21:56.614 11:57:47 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:56.614 11:57:47 -- scripts/common.sh@352 -- # echo 0 00:21:56.614 11:57:47 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:56.614 11:57:47 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:56.614 11:57:47 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:56.614 11:57:47 -- scripts/common.sh@361 -- # (( v++ )) 00:21:56.614 11:57:47 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.614 11:57:47 -- scripts/common.sh@362 -- # decimal 9 00:21:56.614 11:57:47 -- scripts/common.sh@350 -- # local d=9 00:21:56.614 11:57:47 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:56.614 11:57:47 -- scripts/common.sh@352 -- # echo 9 00:21:56.614 11:57:47 -- scripts/common.sh@362 -- # ver1[v]=9 00:21:56.614 11:57:47 -- scripts/common.sh@363 -- # decimal 0 00:21:56.614 11:57:47 -- scripts/common.sh@350 -- # local d=0 00:21:56.614 11:57:47 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:56.614 11:57:47 -- scripts/common.sh@352 -- # echo 0 00:21:56.614 11:57:47 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:56.614 11:57:47 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:56.614 11:57:47 -- scripts/common.sh@364 -- # return 0 00:21:56.614 11:57:47 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:56.615 11:57:47 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:56.615 11:57:47 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:56.615 11:57:47 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:56.615 11:57:47 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:56.615 11:57:47 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:56.615 11:57:47 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:56.615 11:57:47 -- fips/fips.sh@113 -- # build_openssl_config 00:21:56.615 11:57:47 -- fips/fips.sh@37 -- # cat 00:21:56.615 11:57:47 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:56.615 11:57:47 -- fips/fips.sh@58 -- # cat - 00:21:56.874 11:57:47 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:56.874 11:57:47 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:56.874 11:57:47 -- fips/fips.sh@116 -- # mapfile -t providers 00:21:56.874 11:57:47 -- fips/fips.sh@116 -- # openssl list -providers 00:21:56.874 11:57:47 -- fips/fips.sh@116 -- # grep name 00:21:56.874 11:57:47 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:56.874 11:57:47 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:56.874 11:57:47 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:56.874 11:57:47 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:56.874 11:57:47 -- common/autotest_common.sh@638 -- # local es=0 00:21:56.874 11:57:47 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:56.874 11:57:47 -- common/autotest_common.sh@626 -- # local arg=openssl 00:21:56.874 11:57:47 -- fips/fips.sh@127 -- # : 00:21:56.874 11:57:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:56.874 11:57:47 -- common/autotest_common.sh@630 -- # type -t openssl 00:21:56.874 11:57:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:56.874 11:57:47 -- common/autotest_common.sh@632 -- # type -P openssl 00:21:56.874 11:57:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:56.874 11:57:47 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:21:56.874 11:57:47 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:21:56.874 11:57:47 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:21:56.874 Error setting digest 00:21:56.874 0022692D8E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:56.874 0022692D8E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:56.874 11:57:47 -- common/autotest_common.sh@641 -- # es=1 00:21:56.874 11:57:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:56.874 11:57:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:56.874 11:57:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:56.874 11:57:47 -- fips/fips.sh@130 -- # nvmftestinit 00:21:56.874 11:57:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:56.874 11:57:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.874 11:57:47 -- nvmf/common.sh@437 -- # prepare_net_devs 
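The fips.sh preamble above gates the test on the host OpenSSL actually running in FIPS mode: the version comparison accepts 3.0.9 against the 3.0.0 floor, /usr/lib64/ossl-modules/fips.so is present, the provider list shows a FIPS provider next to the base provider, and a non-approved digest is rejected, which is what the "Error setting digest" lines for MD5 demonstrate. A quick manual re-check along the same lines, assuming the same RHEL-style OpenSSL 3 build:

  openssl version                        # expect 3.0.0 or newer
  openssl list -providers | grep name    # expect both a base and a fips provider
  openssl md5 /dev/null \
    && echo 'MD5 accepted - FIPS mode is NOT active' \
    || echo 'MD5 rejected as expected under FIPS'

Here the MD5 failure is the desired outcome; the test only proceeds to the TLSTESTn1 I/O run when the digest is refused.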
00:21:56.874 11:57:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:56.874 11:57:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:56.874 11:57:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.874 11:57:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.874 11:57:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.874 11:57:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:56.874 11:57:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:56.874 11:57:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.874 11:57:47 -- common/autotest_common.sh@10 -- # set +x 00:22:03.438 11:57:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:03.438 11:57:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.438 11:57:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.438 11:57:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.438 11:57:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.438 11:57:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.438 11:57:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.438 11:57:53 -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.438 11:57:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.438 11:57:53 -- nvmf/common.sh@296 -- # e810=() 00:22:03.438 11:57:53 -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.438 11:57:53 -- nvmf/common.sh@297 -- # x722=() 00:22:03.438 11:57:53 -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.438 11:57:53 -- nvmf/common.sh@298 -- # mlx=() 00:22:03.438 11:57:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.438 11:57:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.439 11:57:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.439 11:57:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.439 11:57:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.439 11:57:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.439 11:57:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:03.439 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:03.439 11:57:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.439 11:57:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:03.439 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:03.439 11:57:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.439 11:57:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.439 11:57:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.439 11:57:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:03.439 11:57:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.439 11:57:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:03.439 Found net devices under 0000:af:00.0: cvl_0_0 00:22:03.439 11:57:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.439 11:57:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.439 11:57:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.439 11:57:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:03.439 11:57:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.439 11:57:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:03.439 Found net devices under 0000:af:00.1: cvl_0_1 00:22:03.439 11:57:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.439 11:57:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:03.439 11:57:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:03.439 11:57:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:03.439 11:57:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:03.439 11:57:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.439 11:57:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.439 11:57:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.439 11:57:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:03.439 11:57:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.439 11:57:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.439 11:57:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:03.439 11:57:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.439 11:57:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.439 11:57:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:03.439 11:57:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:03.439 11:57:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.439 11:57:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.439 11:57:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.439 11:57:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:22:03.439 11:57:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.439 11:57:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.439 11:57:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.439 11:57:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.715 11:57:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:22:03.715 00:22:03.715 --- 10.0.0.2 ping statistics --- 00:22:03.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.715 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:03.715 11:57:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:03.715 00:22:03.715 --- 10.0.0.1 ping statistics --- 00:22:03.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.715 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:03.715 11:57:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.715 11:57:54 -- nvmf/common.sh@411 -- # return 0 00:22:03.715 11:57:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:03.715 11:57:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.715 11:57:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:03.715 11:57:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:03.715 11:57:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.715 11:57:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:03.715 11:57:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:03.715 11:57:54 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:03.715 11:57:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:03.715 11:57:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:03.715 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:22:03.715 11:57:54 -- nvmf/common.sh@470 -- # nvmfpid=2532808 00:22:03.715 11:57:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.715 11:57:54 -- nvmf/common.sh@471 -- # waitforlisten 2532808 00:22:03.715 11:57:54 -- common/autotest_common.sh@817 -- # '[' -z 2532808 ']' 00:22:03.715 11:57:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.715 11:57:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:03.715 11:57:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.715 11:57:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:03.715 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:22:03.715 [2024-04-18 11:57:54.163267] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
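The ping exchange above closes out nvmf_tcp_init, which wires the two e810 ports into the usual split topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and an iptables rule admits TCP port 4420. A condensed sketch of that wiring, using the interface names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), so its listener on 10.0.0.2:4420 is reached from the root namespace over cvl_0_1.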
00:22:03.715 [2024-04-18 11:57:54.163360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.715 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.990 [2024-04-18 11:57:54.296196] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.990 [2024-04-18 11:57:54.497960] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.990 [2024-04-18 11:57:54.498006] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.990 [2024-04-18 11:57:54.498018] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.990 [2024-04-18 11:57:54.498047] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.990 [2024-04-18 11:57:54.498056] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.991 [2024-04-18 11:57:54.498092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.558 11:57:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:04.558 11:57:54 -- common/autotest_common.sh@850 -- # return 0 00:22:04.558 11:57:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:04.558 11:57:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:04.558 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:22:04.558 11:57:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.558 11:57:54 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:04.558 11:57:54 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:04.558 11:57:54 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.558 11:57:54 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:04.558 11:57:54 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.558 11:57:54 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.558 11:57:54 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.558 11:57:54 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.558 [2024-04-18 11:57:55.085894] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.558 [2024-04-18 11:57:55.101894] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.558 [2024-04-18 11:57:55.102120] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.817 [2024-04-18 11:57:55.182828] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.817 malloc0 00:22:04.817 11:57:55 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.817 11:57:55 -- fips/fips.sh@147 -- # bdevperf_pid=2532919 00:22:04.817 11:57:55 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:04.817 11:57:55 -- 
fips/fips.sh@148 -- # waitforlisten 2532919 /var/tmp/bdevperf.sock 00:22:04.817 11:57:55 -- common/autotest_common.sh@817 -- # '[' -z 2532919 ']' 00:22:04.817 11:57:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.817 11:57:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:04.817 11:57:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.817 11:57:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:04.817 11:57:55 -- common/autotest_common.sh@10 -- # set +x 00:22:04.817 [2024-04-18 11:57:55.306626] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:04.817 [2024-04-18 11:57:55.306723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532919 ] 00:22:05.075 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.075 [2024-04-18 11:57:55.427568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.347 [2024-04-18 11:57:55.639618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.606 11:57:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:05.606 11:57:56 -- common/autotest_common.sh@850 -- # return 0 00:22:05.606 11:57:56 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:05.864 [2024-04-18 11:57:56.197798] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.864 [2024-04-18 11:57:56.197945] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:05.864 TLSTESTn1 00:22:05.864 11:57:56 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.864 Running I/O for 10 seconds... 
00:22:18.071 00:22:18.071 Latency(us) 00:22:18.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.071 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.071 Verification LBA range: start 0x0 length 0x2000 00:22:18.071 TLSTESTn1 : 10.03 3899.86 15.23 0.00 0.00 32751.58 5819.60 54525.95 00:22:18.071 =================================================================================================================== 00:22:18.071 Total : 3899.86 15.23 0.00 0.00 32751.58 5819.60 54525.95 00:22:18.071 0 00:22:18.071 11:58:06 -- fips/fips.sh@1 -- # cleanup 00:22:18.071 11:58:06 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:18.071 11:58:06 -- common/autotest_common.sh@794 -- # type=--id 00:22:18.071 11:58:06 -- common/autotest_common.sh@795 -- # id=0 00:22:18.071 11:58:06 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:18.071 11:58:06 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:18.071 11:58:06 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:18.071 11:58:06 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:18.071 11:58:06 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:18.071 11:58:06 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:18.071 nvmf_trace.0 00:22:18.071 11:58:06 -- common/autotest_common.sh@809 -- # return 0 00:22:18.071 11:58:06 -- fips/fips.sh@16 -- # killprocess 2532919 00:22:18.071 11:58:06 -- common/autotest_common.sh@936 -- # '[' -z 2532919 ']' 00:22:18.071 11:58:06 -- common/autotest_common.sh@940 -- # kill -0 2532919 00:22:18.071 11:58:06 -- common/autotest_common.sh@941 -- # uname 00:22:18.071 11:58:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.071 11:58:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2532919 00:22:18.071 11:58:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:18.071 11:58:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:18.071 11:58:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2532919' 00:22:18.071 killing process with pid 2532919 00:22:18.071 11:58:06 -- common/autotest_common.sh@955 -- # kill 2532919 00:22:18.071 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.071 00:22:18.071 Latency(us) 00:22:18.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.071 =================================================================================================================== 00:22:18.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.071 [2024-04-18 11:58:06.591566] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.071 11:58:06 -- common/autotest_common.sh@960 -- # wait 2532919 00:22:18.071 11:58:07 -- fips/fips.sh@17 -- # nvmftestfini 00:22:18.071 11:58:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:18.071 11:58:07 -- nvmf/common.sh@117 -- # sync 00:22:18.071 11:58:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.071 11:58:07 -- nvmf/common.sh@120 -- # set +e 00:22:18.071 11:58:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.071 11:58:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.071 rmmod nvme_tcp 00:22:18.071 rmmod nvme_fabrics 00:22:18.071 rmmod nvme_keyring 
00:22:18.071 11:58:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.071 11:58:07 -- nvmf/common.sh@124 -- # set -e 00:22:18.071 11:58:07 -- nvmf/common.sh@125 -- # return 0 00:22:18.071 11:58:07 -- nvmf/common.sh@478 -- # '[' -n 2532808 ']' 00:22:18.071 11:58:07 -- nvmf/common.sh@479 -- # killprocess 2532808 00:22:18.071 11:58:07 -- common/autotest_common.sh@936 -- # '[' -z 2532808 ']' 00:22:18.071 11:58:07 -- common/autotest_common.sh@940 -- # kill -0 2532808 00:22:18.071 11:58:07 -- common/autotest_common.sh@941 -- # uname 00:22:18.071 11:58:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.071 11:58:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2532808 00:22:18.071 11:58:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:18.071 11:58:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:18.072 11:58:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2532808' 00:22:18.072 killing process with pid 2532808 00:22:18.072 11:58:07 -- common/autotest_common.sh@955 -- # kill 2532808 00:22:18.072 [2024-04-18 11:58:07.735744] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.072 11:58:07 -- common/autotest_common.sh@960 -- # wait 2532808 00:22:18.638 11:58:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:18.639 11:58:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:18.639 11:58:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:18.639 11:58:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.639 11:58:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.639 11:58:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.639 11:58:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.639 11:58:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.173 11:58:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.173 11:58:11 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:21.173 00:22:21.173 real 0m24.192s 00:22:21.173 user 0m25.169s 00:22:21.173 sys 0m10.697s 00:22:21.173 11:58:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:21.173 11:58:11 -- common/autotest_common.sh@10 -- # set +x 00:22:21.173 ************************************ 00:22:21.173 END TEST nvmf_fips 00:22:21.173 ************************************ 00:22:21.173 11:58:11 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:22:21.173 11:58:11 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:22:21.173 11:58:11 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:22:21.173 11:58:11 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:22:21.173 11:58:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:21.173 11:58:11 -- common/autotest_common.sh@10 -- # set +x 00:22:27.740 11:58:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:27.740 11:58:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.740 11:58:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.740 11:58:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.740 11:58:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.740 11:58:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.740 11:58:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.740 11:58:17 -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.740 11:58:17 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:22:27.740 11:58:17 -- nvmf/common.sh@296 -- # e810=() 00:22:27.740 11:58:17 -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.740 11:58:17 -- nvmf/common.sh@297 -- # x722=() 00:22:27.740 11:58:17 -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.740 11:58:17 -- nvmf/common.sh@298 -- # mlx=() 00:22:27.740 11:58:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.740 11:58:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.740 11:58:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.740 11:58:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.740 11:58:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.740 11:58:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.740 11:58:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:27.740 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:27.740 11:58:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.740 11:58:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:27.740 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:27.740 11:58:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.740 11:58:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.740 11:58:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.740 11:58:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.740 11:58:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:27.740 11:58:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.740 11:58:17 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:22:27.740 Found net devices under 0000:af:00.0: cvl_0_0 00:22:27.740 11:58:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.740 11:58:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.740 11:58:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.740 11:58:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:27.740 11:58:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.740 11:58:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:27.740 Found net devices under 0000:af:00.1: cvl_0_1 00:22:27.740 11:58:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.740 11:58:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:27.740 11:58:17 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.740 11:58:17 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:22:27.740 11:58:17 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:27.740 11:58:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:27.740 11:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:27.740 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:22:27.740 ************************************ 00:22:27.740 START TEST nvmf_perf_adq 00:22:27.740 ************************************ 00:22:27.740 11:58:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:27.740 * Looking for test storage... 00:22:27.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:27.740 11:58:18 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.740 11:58:18 -- nvmf/common.sh@7 -- # uname -s 00:22:27.740 11:58:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.740 11:58:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.740 11:58:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.740 11:58:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.740 11:58:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.740 11:58:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.740 11:58:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.740 11:58:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.740 11:58:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.740 11:58:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.740 11:58:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:27.740 11:58:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:27.740 11:58:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.740 11:58:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.740 11:58:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.740 11:58:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.740 11:58:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.740 11:58:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.740 11:58:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.740 11:58:18 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.740 11:58:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.740 11:58:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.740 11:58:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.740 11:58:18 -- paths/export.sh@5 -- # export PATH 00:22:27.740 11:58:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.740 11:58:18 -- nvmf/common.sh@47 -- # : 0 00:22:27.740 11:58:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.740 11:58:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.740 11:58:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.740 11:58:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.740 11:58:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.740 11:58:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.740 11:58:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.740 11:58:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.740 11:58:18 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:27.740 11:58:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.740 11:58:18 -- common/autotest_common.sh@10 -- # set +x 00:22:34.355 11:58:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:34.355 11:58:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.355 11:58:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.355 11:58:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.355 
11:58:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.355 11:58:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.355 11:58:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.355 11:58:24 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.355 11:58:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.355 11:58:24 -- nvmf/common.sh@296 -- # e810=() 00:22:34.355 11:58:24 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.355 11:58:24 -- nvmf/common.sh@297 -- # x722=() 00:22:34.355 11:58:24 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.355 11:58:24 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.355 11:58:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.355 11:58:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.355 11:58:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.355 11:58:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.355 11:58:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.355 11:58:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.355 11:58:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.355 11:58:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.355 11:58:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.355 11:58:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.355 11:58:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.355 11:58:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:22:34.355 11:58:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.355 11:58:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:34.355 11:58:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.355 11:58:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.355 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.355 11:58:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.355 11:58:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.355 11:58:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.355 11:58:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:34.355 11:58:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.355 11:58:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.355 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.355 11:58:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.355 11:58:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:34.356 11:58:24 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.356 11:58:24 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:34.356 11:58:24 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:34.356 11:58:24 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:22:34.356 11:58:24 -- target/perf_adq.sh@52 -- # rmmod ice 00:22:35.731 11:58:26 -- target/perf_adq.sh@53 -- # modprobe ice 00:22:37.634 11:58:28 -- target/perf_adq.sh@54 -- # sleep 5 00:22:42.907 11:58:33 -- target/perf_adq.sh@67 -- # nvmftestinit 00:22:42.907 11:58:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:42.907 11:58:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.907 11:58:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:42.907 11:58:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:42.907 11:58:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:42.907 11:58:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.907 11:58:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.907 11:58:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.907 11:58:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:42.907 11:58:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.907 11:58:33 -- common/autotest_common.sh@10 -- # set +x 00:22:42.907 11:58:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:42.907 11:58:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.907 11:58:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.907 11:58:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.907 11:58:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.907 11:58:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.907 11:58:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.907 11:58:33 -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.907 11:58:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.907 11:58:33 -- nvmf/common.sh@296 -- # e810=() 00:22:42.907 11:58:33 -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.907 11:58:33 -- nvmf/common.sh@297 -- # x722=() 00:22:42.907 11:58:33 -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.907 11:58:33 -- nvmf/common.sh@298 -- # mlx=() 00:22:42.907 11:58:33 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:42.907 11:58:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.907 11:58:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.907 11:58:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.907 11:58:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.907 11:58:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.907 11:58:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:42.907 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:42.907 11:58:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.907 11:58:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:42.907 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:42.907 11:58:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.907 11:58:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.907 11:58:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.907 11:58:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:42.907 11:58:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.907 11:58:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:42.907 Found net devices under 0000:af:00.0: cvl_0_0 00:22:42.907 11:58:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.907 11:58:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.907 11:58:33 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.907 11:58:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:42.907 11:58:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.907 11:58:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:42.907 Found net devices under 0000:af:00.1: cvl_0_1 00:22:42.907 11:58:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.907 11:58:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:42.907 11:58:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:42.907 11:58:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:42.907 11:58:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:42.907 11:58:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.907 11:58:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.907 11:58:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.907 11:58:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.907 11:58:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.907 11:58:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.907 11:58:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.907 11:58:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.907 11:58:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.907 11:58:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.907 11:58:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.907 11:58:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.907 11:58:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.907 11:58:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.907 11:58:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.907 11:58:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.907 11:58:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.166 11:58:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.166 11:58:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.166 11:58:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:22:43.166 00:22:43.166 --- 10.0.0.2 ping statistics --- 00:22:43.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.166 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:43.166 11:58:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:43.166 00:22:43.166 --- 10.0.0.1 ping statistics --- 00:22:43.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.166 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:43.166 11:58:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.166 11:58:33 -- nvmf/common.sh@411 -- # return 0 00:22:43.166 11:58:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:43.166 11:58:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.166 11:58:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:43.166 11:58:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:43.166 11:58:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.166 11:58:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:43.166 11:58:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:43.166 11:58:33 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:43.166 11:58:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:43.166 11:58:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:43.166 11:58:33 -- common/autotest_common.sh@10 -- # set +x 00:22:43.166 11:58:33 -- nvmf/common.sh@470 -- # nvmfpid=2543663 00:22:43.166 11:58:33 -- nvmf/common.sh@471 -- # waitforlisten 2543663 00:22:43.166 11:58:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:43.166 11:58:33 -- common/autotest_common.sh@817 -- # '[' -z 2543663 ']' 00:22:43.166 11:58:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.166 11:58:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:43.166 11:58:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.166 11:58:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:43.166 11:58:33 -- common/autotest_common.sh@10 -- # set +x 00:22:43.166 [2024-04-18 11:58:33.635998] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:43.166 [2024-04-18 11:58:33.636088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.425 [2024-04-18 11:58:33.766496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.684 [2024-04-18 11:58:33.984974] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.684 [2024-04-18 11:58:33.985019] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.684 [2024-04-18 11:58:33.985032] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.684 [2024-04-18 11:58:33.985047] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.684 [2024-04-18 11:58:33.985058] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
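[annotation] The nvmf_tcp_init trace above reduces to a small namespace fixture: the first E810 port is moved into a private network namespace as the target interface, the second port stays in the root namespace as the initiator, connectivity is verified with one ping in each direction, and nvmf_tgt is then launched inside that namespace waiting for RPC configuration. A condensed sketch of the same steps, with the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses taken from this run:

  # target interface moves into a private namespace; initiator side stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
  # the target app then runs inside the namespace and waits for RPC configuration
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &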
00:22:43.684 [2024-04-18 11:58:33.985131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.684 [2024-04-18 11:58:33.985219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.684 [2024-04-18 11:58:33.985286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.684 [2024-04-18 11:58:33.985294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.943 11:58:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.943 11:58:34 -- common/autotest_common.sh@850 -- # return 0 00:22:43.943 11:58:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:43.943 11:58:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:43.943 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:43.943 11:58:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.943 11:58:34 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:22:43.943 11:58:34 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:43.943 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.943 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:43.943 11:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.943 11:58:34 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:22:43.943 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.943 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.511 11:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.511 11:58:34 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:44.511 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.511 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.511 [2024-04-18 11:58:34.848408] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.511 11:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.511 11:58:34 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:44.511 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.511 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.511 Malloc1 00:22:44.511 11:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.511 11:58:34 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.511 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.511 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.511 11:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.511 11:58:34 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:44.511 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.511 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.511 11:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.511 11:58:34 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.511 11:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.511 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.511 [2024-04-18 11:58:34.967271] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.511 11:58:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.511 11:58:34 -- target/perf_adq.sh@73 -- # perfpid=2543879 00:22:44.511 11:58:34 -- target/perf_adq.sh@74 -- # sleep 2 00:22:44.511 11:58:34 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:44.511 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.044 11:58:36 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:22:47.044 11:58:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.044 11:58:36 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:47.044 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:22:47.044 11:58:36 -- target/perf_adq.sh@76 -- # wc -l 00:22:47.044 11:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.044 11:58:37 -- target/perf_adq.sh@76 -- # count=4 00:22:47.044 11:58:37 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:22:47.044 11:58:37 -- target/perf_adq.sh@81 -- # wait 2543879 00:22:55.174 Initializing NVMe Controllers 00:22:55.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:55.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:55.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:55.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:55.174 Initialization complete. Launching workers. 00:22:55.174 ======================================================== 00:22:55.174 Latency(us) 00:22:55.174 Device Information : IOPS MiB/s Average min max 00:22:55.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9708.70 37.92 6614.28 2311.24 50697.17 00:22:55.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9471.51 37.00 6758.00 3631.37 11995.33 00:22:55.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9406.32 36.74 6804.38 1730.74 12895.58 00:22:55.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9590.61 37.46 6672.92 2634.30 11442.87 00:22:55.174 ======================================================== 00:22:55.174 Total : 38177.14 149.13 6711.51 1730.74 50697.17 00:22:55.174 00:22:55.174 11:58:45 -- target/perf_adq.sh@82 -- # nvmftestfini 00:22:55.174 11:58:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:55.174 11:58:45 -- nvmf/common.sh@117 -- # sync 00:22:55.174 11:58:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:55.174 11:58:45 -- nvmf/common.sh@120 -- # set +e 00:22:55.174 11:58:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:55.174 11:58:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:55.174 rmmod nvme_tcp 00:22:55.174 rmmod nvme_fabrics 00:22:55.174 rmmod nvme_keyring 00:22:55.174 11:58:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:55.174 11:58:45 -- nvmf/common.sh@124 -- # set -e 00:22:55.174 11:58:45 -- nvmf/common.sh@125 -- # return 0 00:22:55.174 11:58:45 -- nvmf/common.sh@478 -- # '[' -n 2543663 ']' 00:22:55.174 11:58:45 -- nvmf/common.sh@479 -- # killprocess 2543663 00:22:55.174 11:58:45 -- common/autotest_common.sh@936 -- # '[' -z 2543663 ']' 00:22:55.174 11:58:45 -- common/autotest_common.sh@940 -- # 
kill -0 2543663 00:22:55.174 11:58:45 -- common/autotest_common.sh@941 -- # uname 00:22:55.174 11:58:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:55.174 11:58:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2543663 00:22:55.174 11:58:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:55.174 11:58:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:55.174 11:58:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2543663' 00:22:55.174 killing process with pid 2543663 00:22:55.174 11:58:45 -- common/autotest_common.sh@955 -- # kill 2543663 00:22:55.174 11:58:45 -- common/autotest_common.sh@960 -- # wait 2543663 00:22:56.550 11:58:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:56.550 11:58:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:56.550 11:58:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:56.550 11:58:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.550 11:58:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.550 11:58:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.550 11:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.550 11:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.494 11:58:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.494 11:58:48 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:22:58.494 11:58:48 -- target/perf_adq.sh@52 -- # rmmod ice 00:22:59.873 11:58:50 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:02.409 11:58:52 -- target/perf_adq.sh@54 -- # sleep 5 00:23:07.746 11:58:57 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:07.746 11:58:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:07.746 11:58:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.746 11:58:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:07.746 11:58:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:07.746 11:58:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:07.746 11:58:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.746 11:58:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.746 11:58:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.746 11:58:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:07.746 11:58:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.746 11:58:57 -- common/autotest_common.sh@10 -- # set +x 00:23:07.746 11:58:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:07.746 11:58:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.746 11:58:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.746 11:58:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.746 11:58:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.746 11:58:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.746 11:58:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.746 11:58:57 -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.746 11:58:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.746 11:58:57 -- nvmf/common.sh@296 -- # e810=() 00:23:07.746 11:58:57 -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.746 11:58:57 -- nvmf/common.sh@297 -- # x722=() 00:23:07.746 11:58:57 -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.746 11:58:57 -- nvmf/common.sh@298 -- # mlx=() 00:23:07.746 11:58:57 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:23:07.746 11:58:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.746 11:58:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.746 11:58:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.746 11:58:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.746 11:58:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.746 11:58:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:07.746 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:07.746 11:58:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.746 11:58:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:07.746 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:07.746 11:58:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.746 11:58:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.746 11:58:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.746 11:58:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.746 11:58:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:07.746 11:58:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.747 11:58:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:07.747 Found net devices under 0000:af:00.0: cvl_0_0 00:23:07.747 11:58:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.747 11:58:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.747 11:58:57 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.747 11:58:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:07.747 11:58:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.747 11:58:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:07.747 Found net devices under 0000:af:00.1: cvl_0_1 00:23:07.747 11:58:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.747 11:58:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:07.747 11:58:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:07.747 11:58:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:07.747 11:58:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:07.747 11:58:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:07.747 11:58:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.747 11:58:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.747 11:58:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.747 11:58:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.747 11:58:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.747 11:58:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.747 11:58:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.747 11:58:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.747 11:58:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.747 11:58:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.747 11:58:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.747 11:58:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.747 11:58:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.747 11:58:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.747 11:58:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.747 11:58:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.747 11:58:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.747 11:58:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.747 11:58:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.747 11:58:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:23:07.747 00:23:07.747 --- 10.0.0.2 ping statistics --- 00:23:07.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.747 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:23:07.747 11:58:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:23:07.747 00:23:07.747 --- 10.0.0.1 ping statistics --- 00:23:07.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.747 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:23:07.747 11:58:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.747 11:58:57 -- nvmf/common.sh@411 -- # return 0 00:23:07.747 11:58:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:07.747 11:58:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.747 11:58:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:07.747 11:58:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:07.747 11:58:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.747 11:58:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:07.747 11:58:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:07.747 11:58:57 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:07.747 11:58:57 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:07.747 11:58:57 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:07.747 11:58:57 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:07.747 net.core.busy_poll = 1 00:23:07.747 11:58:57 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:07.747 net.core.busy_read = 1 00:23:07.747 11:58:57 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:07.747 11:58:57 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:07.747 11:58:57 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:07.747 11:58:58 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:07.747 11:58:58 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:07.747 11:58:58 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:07.747 11:58:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:07.747 11:58:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:07.747 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:23:07.747 11:58:58 -- nvmf/common.sh@470 -- # nvmfpid=2548228 00:23:07.747 11:58:58 -- nvmf/common.sh@471 -- # waitforlisten 2548228 00:23:07.747 11:58:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:07.747 11:58:58 -- common/autotest_common.sh@817 -- # '[' -z 2548228 ']' 00:23:07.747 11:58:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.747 11:58:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.747 11:58:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
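[annotation] The adq_configure_driver steps above are the host-side ADQ setup for the E810 port: hardware TC offload on, packet-inspect optimization off, busy polling enabled, two traffic classes via mqprio, and a hardware flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1. Condensed from the trace (interface name, IP and port are specific to this run; paths shortened to the SPDK repo root):

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes, two queues each; TC 1 carries the NVMe/TCP connections
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # pin XPS/RX queues to cores with the SPDK helper
  ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0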
00:23:07.747 11:58:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.747 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:23:07.747 [2024-04-18 11:58:58.171059] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:07.747 [2024-04-18 11:58:58.171151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.747 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.006 [2024-04-18 11:58:58.299426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.006 [2024-04-18 11:58:58.512170] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.006 [2024-04-18 11:58:58.512220] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.006 [2024-04-18 11:58:58.512231] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.006 [2024-04-18 11:58:58.512260] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.006 [2024-04-18 11:58:58.512270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.006 [2024-04-18 11:58:58.512353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.006 [2024-04-18 11:58:58.512424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.006 [2024-04-18 11:58:58.512518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.006 [2024-04-18 11:58:58.512527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.573 11:58:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.573 11:58:58 -- common/autotest_common.sh@850 -- # return 0 00:23:08.573 11:58:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:08.573 11:58:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:08.573 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:23:08.573 11:58:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.573 11:58:58 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:08.573 11:58:58 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:08.573 11:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.573 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:23:08.573 11:58:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.573 11:58:58 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:08.573 11:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.573 11:58:58 -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 11:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.140 11:58:59 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:09.140 11:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.140 11:58:59 -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 [2024-04-18 11:58:59.409821] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.140 11:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.140 11:58:59 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
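[annotation] On the target side, adq_configure_nvmf_target 1 (traced above, with the subsystem, namespace and listener RPCs following in the trace below) enables placement-id based socket grouping in the posix sock layer before the framework starts, then creates the TCP transport with a matching socket priority; the earlier non-ADQ pass used --enable-placement-id 0 and --sock-priority 0 instead. rpc_cmd forwards its arguments to scripts/rpc.py, so the sequence corresponds to roughly:

  ./scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # subsystem, namespace and the 10.0.0.2:4420 listener are added next (see the trace below)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420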
00:23:09.140 11:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.140 11:58:59 -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 Malloc1 00:23:09.140 11:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.140 11:58:59 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.140 11:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.140 11:58:59 -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 11:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.140 11:58:59 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:09.140 11:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.140 11:58:59 -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 11:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.140 11:58:59 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.140 11:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.140 11:58:59 -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 [2024-04-18 11:58:59.524698] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.140 11:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.140 11:58:59 -- target/perf_adq.sh@94 -- # perfpid=2548512 00:23:09.140 11:58:59 -- target/perf_adq.sh@95 -- # sleep 2 00:23:09.140 11:58:59 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:09.140 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.044 11:59:01 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:11.044 11:59:01 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:11.044 11:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.044 11:59:01 -- target/perf_adq.sh@97 -- # wc -l 00:23:11.044 11:59:01 -- common/autotest_common.sh@10 -- # set +x 00:23:11.044 11:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.044 11:59:01 -- target/perf_adq.sh@97 -- # count=2 00:23:11.044 11:59:01 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:23:11.044 11:59:01 -- target/perf_adq.sh@103 -- # wait 2548512 00:23:21.027 Initializing NVMe Controllers 00:23:21.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:21.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:21.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:21.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:21.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:21.027 Initialization complete. Launching workers. 
00:23:21.027 ======================================================== 00:23:21.027 Latency(us) 00:23:21.027 Device Information : IOPS MiB/s Average min max 00:23:21.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12500.52 48.83 5119.60 1666.74 9384.97 00:23:21.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4998.17 19.52 12805.06 1746.68 57824.22 00:23:21.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4735.97 18.50 13516.01 1873.19 59809.25 00:23:21.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4666.57 18.23 13742.85 1686.56 61541.52 00:23:21.027 ======================================================== 00:23:21.027 Total : 26901.23 105.08 9521.60 1666.74 61541.52 00:23:21.027 00:23:21.027 11:59:09 -- target/perf_adq.sh@104 -- # nvmftestfini 00:23:21.027 11:59:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:21.027 11:59:09 -- nvmf/common.sh@117 -- # sync 00:23:21.027 11:59:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.027 11:59:09 -- nvmf/common.sh@120 -- # set +e 00:23:21.028 11:59:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.028 11:59:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.028 rmmod nvme_tcp 00:23:21.028 rmmod nvme_fabrics 00:23:21.028 rmmod nvme_keyring 00:23:21.028 11:59:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.028 11:59:09 -- nvmf/common.sh@124 -- # set -e 00:23:21.028 11:59:09 -- nvmf/common.sh@125 -- # return 0 00:23:21.028 11:59:09 -- nvmf/common.sh@478 -- # '[' -n 2548228 ']' 00:23:21.028 11:59:09 -- nvmf/common.sh@479 -- # killprocess 2548228 00:23:21.028 11:59:09 -- common/autotest_common.sh@936 -- # '[' -z 2548228 ']' 00:23:21.028 11:59:09 -- common/autotest_common.sh@940 -- # kill -0 2548228 00:23:21.028 11:59:09 -- common/autotest_common.sh@941 -- # uname 00:23:21.028 11:59:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:21.028 11:59:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2548228 00:23:21.028 11:59:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:21.028 11:59:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:21.028 11:59:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2548228' 00:23:21.028 killing process with pid 2548228 00:23:21.028 11:59:09 -- common/autotest_common.sh@955 -- # kill 2548228 00:23:21.028 11:59:09 -- common/autotest_common.sh@960 -- # wait 2548228 00:23:21.028 11:59:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:21.028 11:59:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:21.028 11:59:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:21.028 11:59:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.028 11:59:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:21.028 11:59:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.028 11:59:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.028 11:59:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.933 11:59:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.933 11:59:13 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:23:22.933 00:23:22.933 real 0m55.504s 00:23:22.933 user 2m55.538s 00:23:22.933 sys 0m14.014s 00:23:22.933 11:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:22.933 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:23:22.933 
************************************ 00:23:22.933 END TEST nvmf_perf_adq 00:23:22.933 ************************************ 00:23:23.193 11:59:13 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:23.193 11:59:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:23.193 11:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:23.193 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:23:23.193 ************************************ 00:23:23.193 START TEST nvmf_shutdown 00:23:23.193 ************************************ 00:23:23.193 11:59:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:23.452 * Looking for test storage... 00:23:23.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.452 11:59:13 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.452 11:59:13 -- nvmf/common.sh@7 -- # uname -s 00:23:23.452 11:59:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.452 11:59:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.452 11:59:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.452 11:59:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.452 11:59:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.452 11:59:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.452 11:59:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.452 11:59:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.452 11:59:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.452 11:59:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.452 11:59:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:23.452 11:59:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:23.452 11:59:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.452 11:59:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.452 11:59:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.452 11:59:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.452 11:59:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.452 11:59:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.452 11:59:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.452 11:59:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.452 11:59:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.452 11:59:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.452 11:59:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.452 11:59:13 -- paths/export.sh@5 -- # export PATH 00:23:23.453 11:59:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.453 11:59:13 -- nvmf/common.sh@47 -- # : 0 00:23:23.453 11:59:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.453 11:59:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.453 11:59:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.453 11:59:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.453 11:59:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.453 11:59:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.453 11:59:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.453 11:59:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.453 11:59:13 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.453 11:59:13 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.453 11:59:13 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:23.453 11:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:23.453 11:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:23.453 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:23:23.453 ************************************ 00:23:23.453 START TEST nvmf_shutdown_tc1 00:23:23.453 ************************************ 00:23:23.453 11:59:13 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:23:23.453 11:59:13 -- target/shutdown.sh@74 -- # starttarget 00:23:23.453 11:59:13 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:23.453 11:59:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:23.453 11:59:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.453 11:59:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:23.453 11:59:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:23.453 11:59:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:23.453 
11:59:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.453 11:59:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.453 11:59:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.453 11:59:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:23.453 11:59:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:23.453 11:59:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.453 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:23:30.022 11:59:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:30.022 11:59:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.022 11:59:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.022 11:59:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.022 11:59:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.022 11:59:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.022 11:59:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.022 11:59:20 -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.022 11:59:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.022 11:59:20 -- nvmf/common.sh@296 -- # e810=() 00:23:30.022 11:59:20 -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.022 11:59:20 -- nvmf/common.sh@297 -- # x722=() 00:23:30.022 11:59:20 -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.022 11:59:20 -- nvmf/common.sh@298 -- # mlx=() 00:23:30.022 11:59:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.022 11:59:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.022 11:59:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.022 11:59:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.022 11:59:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.022 11:59:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.022 11:59:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:30.022 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:30.022 11:59:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:30.022 11:59:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:30.022 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:30.022 11:59:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.022 11:59:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.022 11:59:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.022 11:59:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:30.022 11:59:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.022 11:59:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:30.022 Found net devices under 0000:af:00.0: cvl_0_0 00:23:30.022 11:59:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.022 11:59:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.022 11:59:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.022 11:59:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:30.022 11:59:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.022 11:59:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:30.022 Found net devices under 0000:af:00.1: cvl_0_1 00:23:30.022 11:59:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.022 11:59:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:30.022 11:59:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:30.022 11:59:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:30.022 11:59:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:30.022 11:59:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.022 11:59:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.022 11:59:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.022 11:59:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.022 11:59:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.022 11:59:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.022 11:59:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.022 11:59:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.022 11:59:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.022 11:59:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.022 11:59:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.022 11:59:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.022 11:59:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.022 11:59:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.022 11:59:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.022 11:59:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.022 11:59:20 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.281 11:59:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.281 11:59:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.281 11:59:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:30.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:23:30.281 00:23:30.281 --- 10.0.0.2 ping statistics --- 00:23:30.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.281 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:23:30.281 11:59:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:30.281 00:23:30.281 --- 10.0.0.1 ping statistics --- 00:23:30.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.281 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:30.281 11:59:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.281 11:59:20 -- nvmf/common.sh@411 -- # return 0 00:23:30.281 11:59:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:30.281 11:59:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.281 11:59:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:30.281 11:59:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:30.281 11:59:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.281 11:59:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:30.281 11:59:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:30.281 11:59:20 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:30.281 11:59:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:30.281 11:59:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:30.281 11:59:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.281 11:59:20 -- nvmf/common.sh@470 -- # nvmfpid=2554184 00:23:30.281 11:59:20 -- nvmf/common.sh@471 -- # waitforlisten 2554184 00:23:30.281 11:59:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:30.281 11:59:20 -- common/autotest_common.sh@817 -- # '[' -z 2554184 ']' 00:23:30.281 11:59:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.281 11:59:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:30.281 11:59:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.281 11:59:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:30.281 11:59:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.281 [2024-04-18 11:59:20.789988] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
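For reference, the nvmf_tcp_init sequence traced above builds the physical-NIC test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, the pair is addressed back-to-back as 10.0.0.2/10.0.0.1, port 4420 is opened, and the nvmf_tgt launched just above then runs inside that namespace. A condensed sketch of the same commands; the interface and namespace names are specific to this host:

# Condensed from the trace above (cvl_0_0/cvl_0_1 and cvl_0_0_ns_spdk are this machine's names)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator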
00:23:30.281 [2024-04-18 11:59:20.790076] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.541 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.541 [2024-04-18 11:59:20.929883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.800 [2024-04-18 11:59:21.171477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.800 [2024-04-18 11:59:21.171536] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.800 [2024-04-18 11:59:21.171569] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.800 [2024-04-18 11:59:21.171587] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.800 [2024-04-18 11:59:21.171603] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.800 [2024-04-18 11:59:21.171761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.800 [2024-04-18 11:59:21.171834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.800 [2024-04-18 11:59:21.171922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.800 [2024-04-18 11:59:21.171944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:31.059 11:59:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:31.059 11:59:21 -- common/autotest_common.sh@850 -- # return 0 00:23:31.059 11:59:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:31.059 11:59:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:31.059 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:23:31.318 11:59:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.318 11:59:21 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.318 11:59:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.318 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:23:31.318 [2024-04-18 11:59:21.616565] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.318 11:59:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.318 11:59:21 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:31.318 11:59:21 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:31.318 11:59:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:31.318 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:23:31.318 11:59:21 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.318 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.318 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 
-- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.319 11:59:21 -- target/shutdown.sh@28 -- # cat 00:23:31.319 11:59:21 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:31.319 11:59:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.319 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:23:31.319 Malloc1 00:23:31.319 [2024-04-18 11:59:21.799652] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.578 Malloc2 00:23:31.578 Malloc3 00:23:31.578 Malloc4 00:23:31.837 Malloc5 00:23:31.837 Malloc6 00:23:32.095 Malloc7 00:23:32.095 Malloc8 00:23:32.355 Malloc9 00:23:32.355 Malloc10 00:23:32.355 11:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.355 11:59:22 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:32.355 11:59:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:32.355 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:23:32.355 11:59:22 -- target/shutdown.sh@78 -- # perfpid=2554506 00:23:32.355 11:59:22 -- target/shutdown.sh@79 -- # waitforlisten 2554506 /var/tmp/bdevperf.sock 00:23:32.355 11:59:22 -- common/autotest_common.sh@817 -- # '[' -z 2554506 ']' 00:23:32.355 11:59:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.355 11:59:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:32.355 11:59:22 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:32.355 11:59:22 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:32.355 11:59:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
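Malloc1 through Malloc10 above are the ten malloc bdevs that the create_subsystems step exposes as nqn.2016-06.io.spdk:cnode1 through cnode10, all reachable at 10.0.0.2:4420; the harness batches the RPCs through the generated rpcs.txt. A minimal standalone sketch of the equivalent per-subsystem setup, assuming scripts/rpc.py against the running target: the 64 MiB size and 512-byte block size come from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above, and the serial numbers here are illustrative only.

# Sketch of the per-subsystem setup the test performs for i in 1..10
for i in {1..10}; do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i      # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s "SPDK$i"
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done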
00:23:32.355 11:59:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:32.355 11:59:22 -- nvmf/common.sh@521 -- # config=() 00:23:32.355 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:23:32.355 11:59:22 -- nvmf/common.sh@521 -- # local subsystem config 00:23:32.355 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.355 { 00:23:32.355 "params": { 00:23:32.355 "name": "Nvme$subsystem", 00:23:32.355 "trtype": "$TEST_TRANSPORT", 00:23:32.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.355 "adrfam": "ipv4", 00:23:32.355 "trsvcid": "$NVMF_PORT", 00:23:32.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.355 "hdgst": ${hdgst:-false}, 00:23:32.355 "ddgst": ${ddgst:-false} 00:23:32.355 }, 00:23:32.355 "method": "bdev_nvme_attach_controller" 00:23:32.355 } 00:23:32.355 EOF 00:23:32.355 )") 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.355 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.355 { 00:23:32.355 "params": { 00:23:32.355 "name": "Nvme$subsystem", 00:23:32.355 "trtype": "$TEST_TRANSPORT", 00:23:32.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.355 "adrfam": "ipv4", 00:23:32.355 "trsvcid": "$NVMF_PORT", 00:23:32.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.355 "hdgst": ${hdgst:-false}, 00:23:32.355 "ddgst": ${ddgst:-false} 00:23:32.355 }, 00:23:32.355 "method": "bdev_nvme_attach_controller" 00:23:32.355 } 00:23:32.355 EOF 00:23:32.355 )") 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.355 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.355 { 00:23:32.355 "params": { 00:23:32.355 "name": "Nvme$subsystem", 00:23:32.355 "trtype": "$TEST_TRANSPORT", 00:23:32.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.355 "adrfam": "ipv4", 00:23:32.355 "trsvcid": "$NVMF_PORT", 00:23:32.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.355 "hdgst": ${hdgst:-false}, 00:23:32.355 "ddgst": ${ddgst:-false} 00:23:32.355 }, 00:23:32.355 "method": "bdev_nvme_attach_controller" 00:23:32.355 } 00:23:32.355 EOF 00:23:32.355 )") 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.355 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.355 { 00:23:32.355 "params": { 00:23:32.355 "name": "Nvme$subsystem", 00:23:32.355 "trtype": "$TEST_TRANSPORT", 00:23:32.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.355 "adrfam": "ipv4", 00:23:32.355 "trsvcid": "$NVMF_PORT", 00:23:32.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.355 "hdgst": ${hdgst:-false}, 00:23:32.355 "ddgst": ${ddgst:-false} 00:23:32.355 }, 00:23:32.355 "method": "bdev_nvme_attach_controller" 00:23:32.355 } 00:23:32.355 EOF 00:23:32.355 )") 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.355 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.355 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.355 { 00:23:32.355 "params": { 00:23:32.355 "name": "Nvme$subsystem", 00:23:32.355 "trtype": 
"$TEST_TRANSPORT", 00:23:32.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.356 "adrfam": "ipv4", 00:23:32.356 "trsvcid": "$NVMF_PORT", 00:23:32.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.356 "hdgst": ${hdgst:-false}, 00:23:32.356 "ddgst": ${ddgst:-false} 00:23:32.356 }, 00:23:32.356 "method": "bdev_nvme_attach_controller" 00:23:32.356 } 00:23:32.356 EOF 00:23:32.356 )") 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.615 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.615 { 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme$subsystem", 00:23:32.615 "trtype": "$TEST_TRANSPORT", 00:23:32.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "$NVMF_PORT", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.615 "hdgst": ${hdgst:-false}, 00:23:32.615 "ddgst": ${ddgst:-false} 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 } 00:23:32.615 EOF 00:23:32.615 )") 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.615 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.615 { 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme$subsystem", 00:23:32.615 "trtype": "$TEST_TRANSPORT", 00:23:32.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "$NVMF_PORT", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.615 "hdgst": ${hdgst:-false}, 00:23:32.615 "ddgst": ${ddgst:-false} 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 } 00:23:32.615 EOF 00:23:32.615 )") 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.615 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.615 { 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme$subsystem", 00:23:32.615 "trtype": "$TEST_TRANSPORT", 00:23:32.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "$NVMF_PORT", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.615 "hdgst": ${hdgst:-false}, 00:23:32.615 "ddgst": ${ddgst:-false} 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 } 00:23:32.615 EOF 00:23:32.615 )") 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.615 11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.615 { 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme$subsystem", 00:23:32.615 "trtype": "$TEST_TRANSPORT", 00:23:32.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "$NVMF_PORT", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.615 "hdgst": ${hdgst:-false}, 00:23:32.615 "ddgst": ${ddgst:-false} 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 } 00:23:32.615 EOF 00:23:32.615 )") 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.615 
11:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:32.615 { 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme$subsystem", 00:23:32.615 "trtype": "$TEST_TRANSPORT", 00:23:32.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "$NVMF_PORT", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.615 "hdgst": ${hdgst:-false}, 00:23:32.615 "ddgst": ${ddgst:-false} 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 } 00:23:32.615 EOF 00:23:32.615 )") 00:23:32.615 11:59:22 -- nvmf/common.sh@543 -- # cat 00:23:32.615 [2024-04-18 11:59:22.944523] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:32.615 [2024-04-18 11:59:22.944612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:32.615 11:59:22 -- nvmf/common.sh@545 -- # jq . 00:23:32.615 11:59:22 -- nvmf/common.sh@546 -- # IFS=, 00:23:32.615 11:59:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme1", 00:23:32.615 "trtype": "tcp", 00:23:32.615 "traddr": "10.0.0.2", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "4420", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.615 "hdgst": false, 00:23:32.615 "ddgst": false 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 },{ 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme2", 00:23:32.615 "trtype": "tcp", 00:23:32.615 "traddr": "10.0.0.2", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "4420", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:32.615 "hdgst": false, 00:23:32.615 "ddgst": false 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 },{ 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme3", 00:23:32.615 "trtype": "tcp", 00:23:32.615 "traddr": "10.0.0.2", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "4420", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:32.615 "hdgst": false, 00:23:32.615 "ddgst": false 00:23:32.615 }, 00:23:32.615 "method": "bdev_nvme_attach_controller" 00:23:32.615 },{ 00:23:32.615 "params": { 00:23:32.615 "name": "Nvme4", 00:23:32.615 "trtype": "tcp", 00:23:32.615 "traddr": "10.0.0.2", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "4420", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 },{ 00:23:32.616 "params": { 00:23:32.616 "name": "Nvme5", 00:23:32.616 "trtype": "tcp", 00:23:32.616 "traddr": "10.0.0.2", 00:23:32.616 "adrfam": "ipv4", 00:23:32.616 "trsvcid": "4420", 00:23:32.616 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 },{ 00:23:32.616 "params": { 00:23:32.616 "name": "Nvme6", 00:23:32.616 "trtype": "tcp", 00:23:32.616 
"traddr": "10.0.0.2", 00:23:32.616 "adrfam": "ipv4", 00:23:32.616 "trsvcid": "4420", 00:23:32.616 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 },{ 00:23:32.616 "params": { 00:23:32.616 "name": "Nvme7", 00:23:32.616 "trtype": "tcp", 00:23:32.616 "traddr": "10.0.0.2", 00:23:32.616 "adrfam": "ipv4", 00:23:32.616 "trsvcid": "4420", 00:23:32.616 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 },{ 00:23:32.616 "params": { 00:23:32.616 "name": "Nvme8", 00:23:32.616 "trtype": "tcp", 00:23:32.616 "traddr": "10.0.0.2", 00:23:32.616 "adrfam": "ipv4", 00:23:32.616 "trsvcid": "4420", 00:23:32.616 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 },{ 00:23:32.616 "params": { 00:23:32.616 "name": "Nvme9", 00:23:32.616 "trtype": "tcp", 00:23:32.616 "traddr": "10.0.0.2", 00:23:32.616 "adrfam": "ipv4", 00:23:32.616 "trsvcid": "4420", 00:23:32.616 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 },{ 00:23:32.616 "params": { 00:23:32.616 "name": "Nvme10", 00:23:32.616 "trtype": "tcp", 00:23:32.616 "traddr": "10.0.0.2", 00:23:32.616 "adrfam": "ipv4", 00:23:32.616 "trsvcid": "4420", 00:23:32.616 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:32.616 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:32.616 "hdgst": false, 00:23:32.616 "ddgst": false 00:23:32.616 }, 00:23:32.616 "method": "bdev_nvme_attach_controller" 00:23:32.616 }' 00:23:32.616 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.616 [2024-04-18 11:59:23.074122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.875 [2024-04-18 11:59:23.298262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.406 11:59:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:35.406 11:59:25 -- common/autotest_common.sh@850 -- # return 0 00:23:35.406 11:59:25 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:35.406 11:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.406 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.406 11:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.406 11:59:25 -- target/shutdown.sh@83 -- # kill -9 2554506 00:23:35.406 11:59:25 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:35.406 11:59:25 -- target/shutdown.sh@87 -- # sleep 1 00:23:35.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2554506 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:35.975 11:59:26 -- target/shutdown.sh@88 -- # kill -0 2554184 00:23:35.975 11:59:26 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:35.975 11:59:26 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:35.975 11:59:26 -- nvmf/common.sh@521 -- # config=() 00:23:35.975 11:59:26 -- nvmf/common.sh@521 -- # local subsystem config 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 
00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.975 11:59:26 -- 
nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.975 { 00:23:35.975 "params": { 00:23:35.975 "name": "Nvme$subsystem", 00:23:35.975 "trtype": "$TEST_TRANSPORT", 00:23:35.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.975 "adrfam": "ipv4", 00:23:35.975 "trsvcid": "$NVMF_PORT", 00:23:35.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.975 "hdgst": ${hdgst:-false}, 00:23:35.975 "ddgst": ${ddgst:-false} 00:23:35.975 }, 00:23:35.975 "method": "bdev_nvme_attach_controller" 00:23:35.975 } 00:23:35.975 EOF 00:23:35.975 )") 00:23:35.975 [2024-04-18 11:59:26.502274] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:35.975 [2024-04-18 11:59:26.502369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555202 ] 00:23:35.975 11:59:26 -- nvmf/common.sh@543 -- # cat 00:23:35.975 11:59:26 -- nvmf/common.sh@545 -- # jq . 00:23:35.975 11:59:26 -- nvmf/common.sh@546 -- # IFS=, 00:23:35.976 11:59:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme1", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme2", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme3", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme4", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme5", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme6", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 
"trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme7", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme8", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme9", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 },{ 00:23:35.976 "params": { 00:23:35.976 "name": "Nvme10", 00:23:35.976 "trtype": "tcp", 00:23:35.976 "traddr": "10.0.0.2", 00:23:35.976 "adrfam": "ipv4", 00:23:35.976 "trsvcid": "4420", 00:23:35.976 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:35.976 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:35.976 "hdgst": false, 00:23:35.976 "ddgst": false 00:23:35.976 }, 00:23:35.976 "method": "bdev_nvme_attach_controller" 00:23:35.976 }' 00:23:36.236 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.236 [2024-04-18 11:59:26.629848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.495 [2024-04-18 11:59:26.851535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.399 Running I/O for 1 seconds... 
00:23:39.337
00:23:39.337 Latency(us)
00:23:39.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:39.337 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme1n1 : 1.02 249.88 15.62 0.00 0.00 253477.68 20342.37 233203.30
00:23:39.337 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme2n1 : 1.08 236.69 14.79 0.00 0.00 263903.64 20761.80 236558.75
00:23:39.337 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme3n1 : 1.09 293.75 18.36 0.00 0.00 209292.00 16986.93 219781.53
00:23:39.337 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme4n1 : 1.15 282.38 17.65 0.00 0.00 214773.81 3106.41 233203.30
00:23:39.337 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme5n1 : 1.08 237.89 14.87 0.00 0.00 250411.01 20866.66 231525.58
00:23:39.337 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme6n1 : 1.16 276.51 17.28 0.00 0.00 213217.77 21915.24 234881.02
00:23:39.337 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme7n1 : 1.12 229.38 14.34 0.00 0.00 252073.78 22963.81 233203.30
00:23:39.337 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme8n1 : 1.16 275.49 17.22 0.00 0.00 207660.81 21390.95 216426.09
00:23:39.337 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme9n1 : 1.18 272.28 17.02 0.00 0.00 207227.78 20342.37 241591.91
00:23:39.337 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.337 Verification LBA range: start 0x0 length 0x400
00:23:39.337 Nvme10n1 : 1.18 270.47 16.90 0.00 0.00 205629.52 14470.35 256691.40
00:23:39.337 ===================================================================================================================
00:23:39.337 Total : 2624.72 164.04 0.00 0.00 225387.17 3106.41 256691.40
00:23:40.794 11:59:30 -- target/shutdown.sh@94 -- # stoptarget
00:23:40.794 11:59:30 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:40.794 11:59:30 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:40.794 11:59:30 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:40.794 11:59:30 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:40.794 11:59:30 -- nvmf/common.sh@477 -- # nvmfcleanup
00:23:40.794 11:59:30 -- nvmf/common.sh@117 -- # sync
00:23:40.794 11:59:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:40.794 11:59:30 -- nvmf/common.sh@120 -- # set +e
00:23:40.794 11:59:30 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:40.794 11:59:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:40.794 rmmod nvme_tcp
00:23:40.794 rmmod nvme_fabrics
00:23:40.794 rmmod nvme_keyring
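The verify run summarized above can be reproduced outside the harness by saving the generated controller list into a standard SPDK JSON config and pointing bdevperf at it. A single-controller sketch follows: nvmf.json is a hypothetical filename, the entry simply repeats for cnode1 through cnode10 in the config printed earlier, and the bdevperf flags are the ones from the invocation above (run from the SPDK build tree while the target is still listening on 10.0.0.2:4420).

# Sketch: one bdev_nvme_attach_controller entry; the generated config repeats this for cnode1..cnode10
cat > nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json nvmf.json -q 64 -o 65536 -w verify -t 1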
00:23:40.794 11:59:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.794 11:59:31 -- nvmf/common.sh@124 -- # set -e 00:23:40.794 11:59:31 -- nvmf/common.sh@125 -- # return 0 00:23:40.794 11:59:31 -- nvmf/common.sh@478 -- # '[' -n 2554184 ']' 00:23:40.794 11:59:31 -- nvmf/common.sh@479 -- # killprocess 2554184 00:23:40.794 11:59:31 -- common/autotest_common.sh@936 -- # '[' -z 2554184 ']' 00:23:40.794 11:59:31 -- common/autotest_common.sh@940 -- # kill -0 2554184 00:23:40.794 11:59:31 -- common/autotest_common.sh@941 -- # uname 00:23:40.794 11:59:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.794 11:59:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2554184 00:23:40.794 11:59:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:40.794 11:59:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:40.794 11:59:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2554184' 00:23:40.794 killing process with pid 2554184 00:23:40.794 11:59:31 -- common/autotest_common.sh@955 -- # kill 2554184 00:23:40.794 11:59:31 -- common/autotest_common.sh@960 -- # wait 2554184 00:23:44.086 11:59:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:44.086 11:59:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:44.086 11:59:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:44.086 11:59:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.086 11:59:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.086 11:59:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.086 11:59:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.086 11:59:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.993 11:59:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.993 00:23:45.993 real 0m22.385s 00:23:45.993 user 0m58.677s 00:23:45.993 sys 0m7.307s 00:23:45.993 11:59:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:45.993 11:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:45.993 ************************************ 00:23:45.993 END TEST nvmf_shutdown_tc1 00:23:45.993 ************************************ 00:23:45.993 11:59:36 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:45.993 11:59:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:45.993 11:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:45.993 11:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:46.252 ************************************ 00:23:46.252 START TEST nvmf_shutdown_tc2 00:23:46.252 ************************************ 00:23:46.252 11:59:36 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:23:46.252 11:59:36 -- target/shutdown.sh@99 -- # starttarget 00:23:46.252 11:59:36 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:46.252 11:59:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:46.252 11:59:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.252 11:59:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:46.252 11:59:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:46.252 11:59:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:46.252 11:59:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.252 11:59:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.252 11:59:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.252 11:59:36 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:46.252 11:59:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.252 11:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:46.252 11:59:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:46.252 11:59:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.252 11:59:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.252 11:59:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.252 11:59:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.252 11:59:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.252 11:59:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.252 11:59:36 -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.252 11:59:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.252 11:59:36 -- nvmf/common.sh@296 -- # e810=() 00:23:46.252 11:59:36 -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.252 11:59:36 -- nvmf/common.sh@297 -- # x722=() 00:23:46.252 11:59:36 -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.252 11:59:36 -- nvmf/common.sh@298 -- # mlx=() 00:23:46.252 11:59:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.252 11:59:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.252 11:59:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.252 11:59:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.252 11:59:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.252 11:59:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.252 11:59:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:46.252 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:46.252 11:59:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.252 11:59:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:46.252 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:46.252 11:59:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.252 11:59:36 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.252 11:59:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.252 11:59:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.252 11:59:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:46.252 11:59:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.252 11:59:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:46.252 Found net devices under 0000:af:00.0: cvl_0_0 00:23:46.252 11:59:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.252 11:59:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.252 11:59:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.252 11:59:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:46.252 11:59:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.252 11:59:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:46.252 Found net devices under 0000:af:00.1: cvl_0_1 00:23:46.252 11:59:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.252 11:59:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:46.252 11:59:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:46.252 11:59:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:46.252 11:59:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:46.253 11:59:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:46.253 11:59:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.253 11:59:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.253 11:59:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.253 11:59:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.253 11:59:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.253 11:59:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.253 11:59:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.253 11:59:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.253 11:59:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.253 11:59:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.253 11:59:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.253 11:59:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.253 11:59:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.253 11:59:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.253 11:59:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.253 11:59:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.253 11:59:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.512 11:59:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.512 11:59:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
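The nvmf_tcp_init trace above (nvmf/common.sh@229-@264) builds the test rig so that target and initiator traffic crosses the physical ice ports instead of the kernel loopback: cvl_0_0 is moved into a private network namespace and serves as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP on port 4420. The same sequence, condensed into plain commands (every command is taken from the trace; only the grouping is editorial):

    # nvmf_tcp_init, as traced at nvmf/common.sh@244-@264
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in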
00:23:46.512 11:59:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:23:46.512 00:23:46.512 --- 10.0.0.2 ping statistics --- 00:23:46.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.512 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:23:46.512 11:59:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:23:46.512 00:23:46.512 --- 10.0.0.1 ping statistics --- 00:23:46.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.512 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:46.512 11:59:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.512 11:59:36 -- nvmf/common.sh@411 -- # return 0 00:23:46.512 11:59:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:46.512 11:59:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.512 11:59:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:46.512 11:59:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:46.512 11:59:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.512 11:59:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:46.512 11:59:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:46.512 11:59:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:46.512 11:59:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:46.512 11:59:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:46.512 11:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:46.512 11:59:36 -- nvmf/common.sh@470 -- # nvmfpid=2557043 00:23:46.512 11:59:36 -- nvmf/common.sh@471 -- # waitforlisten 2557043 00:23:46.512 11:59:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:46.512 11:59:36 -- common/autotest_common.sh@817 -- # '[' -z 2557043 ']' 00:23:46.512 11:59:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.512 11:59:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:46.512 11:59:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.512 11:59:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:46.512 11:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:46.512 [2024-04-18 11:59:37.047618] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:46.512 [2024-04-18 11:59:37.047705] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.771 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.771 [2024-04-18 11:59:37.179434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.031 [2024-04-18 11:59:37.401786] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
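nvmfappstart above launches nvmf_tgt inside the target namespace (nvmfpid=2557043) and then blocks in waitforlisten until the app's RPC server is reachable. A rough sketch of that wait, assuming the helper probes the socket with scripts/rpc.py; rpc_addr=/var/tmp/spdk.sock and max_retries=100 appear in the trace, but the probe method is an assumption about internals not shown here:

    # waitforlisten, approximated: wait for $pid to answer RPC on its socket.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # target process died
            if [ -S "$rpc_addr" ] &&
               scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                    # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }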
00:23:47.031 [2024-04-18 11:59:37.401829] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.031 [2024-04-18 11:59:37.401840] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.031 [2024-04-18 11:59:37.401869] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.031 [2024-04-18 11:59:37.401879] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.031 [2024-04-18 11:59:37.402000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.031 [2024-04-18 11:59:37.402070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.031 [2024-04-18 11:59:37.402153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.031 [2024-04-18 11:59:37.402178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:47.290 11:59:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:47.290 11:59:37 -- common/autotest_common.sh@850 -- # return 0 00:23:47.290 11:59:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:47.290 11:59:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:47.290 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.549 11:59:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.549 11:59:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.549 11:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.549 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.550 [2024-04-18 11:59:37.880533] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.550 11:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.550 11:59:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:47.550 11:59:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:47.550 11:59:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:47.550 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.550 11:59:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- 
target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.550 11:59:37 -- target/shutdown.sh@28 -- # cat 00:23:47.550 11:59:37 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:47.550 11:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.550 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.550 Malloc1 00:23:47.550 [2024-04-18 11:59:38.064557] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.809 Malloc2 00:23:47.809 Malloc3 00:23:48.068 Malloc4 00:23:48.068 Malloc5 00:23:48.068 Malloc6 00:23:48.327 Malloc7 00:23:48.327 Malloc8 00:23:48.586 Malloc9 00:23:48.586 Malloc10 00:23:48.586 11:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.586 11:59:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:48.586 11:59:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:48.586 11:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.846 11:59:39 -- target/shutdown.sh@103 -- # perfpid=2557508 00:23:48.846 11:59:39 -- target/shutdown.sh@104 -- # waitforlisten 2557508 /var/tmp/bdevperf.sock 00:23:48.846 11:59:39 -- common/autotest_common.sh@817 -- # '[' -z 2557508 ']' 00:23:48.846 11:59:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.846 11:59:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:48.846 11:59:39 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:48.846 11:59:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
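The create_subsystems phase above (target/shutdown.sh@26-@28) appends one block of RPC commands per subsystem to rpcs.txt, which is then replayed against the running nvmf_tgt; the blocks themselves are not echoed to the log, but the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener that appear afterwards suggest each iteration adds roughly the following. The RPC names are standard SPDK RPCs; the malloc size, block size and serial-number scheme are assumptions, and $testdir stands for .../spdk/test/nvmf/target as seen in the teardown trace:

    # Hypothetical per-subsystem block written to rpcs.txt, equivalent to the
    # "cat" heredoc that shutdown.sh@28 traces ten times.
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done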
00:23:48.846 11:59:39 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:48.846 11:59:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:48.846 11:59:39 -- nvmf/common.sh@521 -- # config=() 00:23:48.846 11:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.846 11:59:39 -- nvmf/common.sh@521 -- # local subsystem config 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.846 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.846 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.846 { 00:23:48.846 "params": { 00:23:48.846 "name": "Nvme$subsystem", 00:23:48.846 "trtype": "$TEST_TRANSPORT", 00:23:48.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.846 "adrfam": "ipv4", 00:23:48.846 "trsvcid": "$NVMF_PORT", 00:23:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.846 "hdgst": ${hdgst:-false}, 00:23:48.846 "ddgst": ${ddgst:-false} 00:23:48.846 }, 00:23:48.846 "method": "bdev_nvme_attach_controller" 00:23:48.846 } 00:23:48.846 EOF 00:23:48.846 )") 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.847 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.847 { 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme$subsystem", 00:23:48.847 "trtype": "$TEST_TRANSPORT", 00:23:48.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "$NVMF_PORT", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.847 "hdgst": ${hdgst:-false}, 00:23:48.847 "ddgst": ${ddgst:-false} 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 } 00:23:48.847 EOF 00:23:48.847 )") 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.847 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.847 { 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme$subsystem", 00:23:48.847 "trtype": "$TEST_TRANSPORT", 00:23:48.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "$NVMF_PORT", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.847 "hdgst": ${hdgst:-false}, 00:23:48.847 "ddgst": ${ddgst:-false} 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 } 
00:23:48.847 EOF 00:23:48.847 )") 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.847 11:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.847 { 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme$subsystem", 00:23:48.847 "trtype": "$TEST_TRANSPORT", 00:23:48.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "$NVMF_PORT", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.847 "hdgst": ${hdgst:-false}, 00:23:48.847 "ddgst": ${ddgst:-false} 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 } 00:23:48.847 EOF 00:23:48.847 )") 00:23:48.847 11:59:39 -- nvmf/common.sh@543 -- # cat 00:23:48.847 [2024-04-18 11:59:39.219530] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:48.847 [2024-04-18 11:59:39.219622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557508 ] 00:23:48.847 11:59:39 -- nvmf/common.sh@545 -- # jq . 00:23:48.847 11:59:39 -- nvmf/common.sh@546 -- # IFS=, 00:23:48.847 11:59:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme1", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme2", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme3", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme4", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme5", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": 
"bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme6", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme7", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme8", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme9", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 },{ 00:23:48.847 "params": { 00:23:48.847 "name": "Nvme10", 00:23:48.847 "trtype": "tcp", 00:23:48.847 "traddr": "10.0.0.2", 00:23:48.847 "adrfam": "ipv4", 00:23:48.847 "trsvcid": "4420", 00:23:48.847 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:48.847 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:48.847 "hdgst": false, 00:23:48.847 "ddgst": false 00:23:48.847 }, 00:23:48.847 "method": "bdev_nvme_attach_controller" 00:23:48.847 }' 00:23:48.847 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.847 [2024-04-18 11:59:39.346312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.106 [2024-04-18 11:59:39.570593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.011 Running I/O for 10 seconds... 
00:23:51.271 11:59:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:51.271 11:59:41 -- common/autotest_common.sh@850 -- # return 0 00:23:51.271 11:59:41 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:51.271 11:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.271 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.271 11:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.271 11:59:41 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:51.271 11:59:41 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:51.271 11:59:41 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:51.271 11:59:41 -- target/shutdown.sh@57 -- # local ret=1 00:23:51.271 11:59:41 -- target/shutdown.sh@58 -- # local i 00:23:51.271 11:59:41 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:51.271 11:59:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:51.271 11:59:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:51.271 11:59:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:51.271 11:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.271 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.271 11:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.271 11:59:41 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:51.271 11:59:41 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:51.271 11:59:41 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:51.531 11:59:41 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:51.531 11:59:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:51.531 11:59:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:51.531 11:59:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:51.531 11:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.531 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:51.531 11:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.531 11:59:42 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:51.531 11:59:42 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:51.531 11:59:42 -- target/shutdown.sh@64 -- # ret=0 00:23:51.531 11:59:42 -- target/shutdown.sh@65 -- # break 00:23:51.531 11:59:42 -- target/shutdown.sh@69 -- # return 0 00:23:51.531 11:59:42 -- target/shutdown.sh@110 -- # killprocess 2557508 00:23:51.531 11:59:42 -- common/autotest_common.sh@936 -- # '[' -z 2557508 ']' 00:23:51.531 11:59:42 -- common/autotest_common.sh@940 -- # kill -0 2557508 00:23:51.531 11:59:42 -- common/autotest_common.sh@941 -- # uname 00:23:51.531 11:59:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:51.531 11:59:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2557508 00:23:51.791 11:59:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:51.791 11:59:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:51.791 11:59:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2557508' 00:23:51.791 killing process with pid 2557508 00:23:51.791 11:59:42 -- common/autotest_common.sh@955 -- # kill 2557508 00:23:51.791 11:59:42 -- common/autotest_common.sh@960 -- # wait 2557508 00:23:51.791 Received shutdown signal, test time was about 0.754715 seconds 00:23:51.791 00:23:51.791 Latency(us) 00:23:51.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:51.791 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme1n1 : 0.70 272.92 17.06 0.00 0.00 231270.26 17511.22 216426.09 00:23:51.791 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme2n1 : 0.73 264.25 16.52 0.00 0.00 233505.59 19818.09 216426.09 00:23:51.791 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme3n1 : 0.75 257.52 16.10 0.00 0.00 234325.88 19398.66 228170.14 00:23:51.791 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme4n1 : 0.70 275.87 17.24 0.00 0.00 212648.48 18454.94 219781.53 00:23:51.791 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme5n1 : 0.72 266.62 16.66 0.00 0.00 214947.70 34183.58 184549.38 00:23:51.791 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme6n1 : 0.73 262.19 16.39 0.00 0.00 213774.06 19398.66 193776.84 00:23:51.791 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme7n1 : 0.72 267.00 16.69 0.00 0.00 201503.54 19188.94 214748.36 00:23:51.791 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme8n1 : 0.75 254.63 15.91 0.00 0.00 210480.06 23488.10 211392.92 00:23:51.791 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme9n1 : 0.74 260.01 16.25 0.00 0.00 199770.11 18350.08 243269.63 00:23:51.791 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.791 Verification LBA range: start 0x0 length 0x400 00:23:51.791 Nvme10n1 : 0.74 258.20 16.14 0.00 0.00 196364.15 19398.66 228170.14 00:23:51.791 =================================================================================================================== 00:23:51.791 Total : 2639.21 164.95 0.00 0.00 214858.98 17511.22 243269.63 00:23:53.167 11:59:43 -- target/shutdown.sh@113 -- # sleep 1 00:23:54.102 11:59:44 -- target/shutdown.sh@114 -- # kill -0 2557043 00:23:54.102 11:59:44 -- target/shutdown.sh@116 -- # stoptarget 00:23:54.102 11:59:44 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:54.102 11:59:44 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:54.102 11:59:44 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:54.102 11:59:44 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:54.103 11:59:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:54.103 11:59:44 -- nvmf/common.sh@117 -- # sync 00:23:54.103 11:59:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.103 11:59:44 -- nvmf/common.sh@120 -- # set +e 00:23:54.103 11:59:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.103 11:59:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.103 rmmod nvme_tcp 00:23:54.103 rmmod nvme_fabrics 
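Earlier in this stretch (target/shutdown.sh@50-@69) the harness polls bdevperf over its RPC socket until Nvme1n1 has completed at least 100 reads (67 on the first pass, 131 a quarter-second later) before killing bdevperf, which is what produced the latency table above. A condensed reconstruction of that wait loop; the names, threshold and retry budget are taken from the trace, and rpc_cmd is the harness's RPC wrapper:

    # waitforio, reconstructed from the target/shutdown.sh@50-@69 trace:
    # poll bdev_get_iostat until the named bdev shows >= 100 completed reads.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        [ -z "$rpc_sock" ] && return 1
        [ -z "$bdev" ] && return 1
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }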
00:23:54.103 rmmod nvme_keyring 00:23:54.103 11:59:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.103 11:59:44 -- nvmf/common.sh@124 -- # set -e 00:23:54.103 11:59:44 -- nvmf/common.sh@125 -- # return 0 00:23:54.103 11:59:44 -- nvmf/common.sh@478 -- # '[' -n 2557043 ']' 00:23:54.103 11:59:44 -- nvmf/common.sh@479 -- # killprocess 2557043 00:23:54.103 11:59:44 -- common/autotest_common.sh@936 -- # '[' -z 2557043 ']' 00:23:54.103 11:59:44 -- common/autotest_common.sh@940 -- # kill -0 2557043 00:23:54.103 11:59:44 -- common/autotest_common.sh@941 -- # uname 00:23:54.103 11:59:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:54.103 11:59:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2557043 00:23:54.103 11:59:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:54.103 11:59:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:54.103 11:59:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2557043' 00:23:54.103 killing process with pid 2557043 00:23:54.103 11:59:44 -- common/autotest_common.sh@955 -- # kill 2557043 00:23:54.103 11:59:44 -- common/autotest_common.sh@960 -- # wait 2557043 00:23:57.430 11:59:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:57.430 11:59:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:57.430 11:59:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:57.430 11:59:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.430 11:59:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.430 11:59:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.430 11:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.430 11:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.337 11:59:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.337 00:23:59.337 real 0m13.174s 00:23:59.337 user 0m43.336s 00:23:59.337 sys 0m1.977s 00:23:59.337 11:59:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:59.337 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.337 ************************************ 00:23:59.337 END TEST nvmf_shutdown_tc2 00:23:59.337 ************************************ 00:23:59.337 11:59:49 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:59.337 11:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:59.337 11:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.337 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.596 ************************************ 00:23:59.596 START TEST nvmf_shutdown_tc3 00:23:59.596 ************************************ 00:23:59.596 11:59:49 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:23:59.596 11:59:49 -- target/shutdown.sh@121 -- # starttarget 00:23:59.596 11:59:49 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:59.596 11:59:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:59.596 11:59:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.596 11:59:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:59.596 11:59:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:59.596 11:59:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:59.596 11:59:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.596 11:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.596 11:59:49 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:59.596 11:59:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:59.596 11:59:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:59.596 11:59:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.596 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.596 11:59:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.596 11:59:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.596 11:59:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.596 11:59:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.596 11:59:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.596 11:59:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.596 11:59:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.596 11:59:49 -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.596 11:59:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.596 11:59:49 -- nvmf/common.sh@296 -- # e810=() 00:23:59.596 11:59:49 -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.596 11:59:49 -- nvmf/common.sh@297 -- # x722=() 00:23:59.596 11:59:49 -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.596 11:59:49 -- nvmf/common.sh@298 -- # mlx=() 00:23:59.596 11:59:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.596 11:59:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.596 11:59:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.597 11:59:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.597 11:59:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.597 11:59:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.597 11:59:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.597 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.597 11:59:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.597 11:59:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.597 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.597 11:59:49 -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.597 11:59:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.597 11:59:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.597 11:59:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.597 11:59:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.597 11:59:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.597 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.597 11:59:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.597 11:59:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.597 11:59:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.597 11:59:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.597 11:59:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.597 11:59:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.597 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.597 11:59:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.597 11:59:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:59.597 11:59:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:59.597 11:59:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:59.597 11:59:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:59.597 11:59:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.597 11:59:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.597 11:59:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.597 11:59:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.597 11:59:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.597 11:59:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.597 11:59:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.597 11:59:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.597 11:59:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.597 11:59:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.597 11:59:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.597 11:59:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.597 11:59:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.856 11:59:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.857 11:59:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.857 11:59:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.857 11:59:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.857 11:59:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.857 11:59:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.857 11:59:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:23:59.857 00:23:59.857 --- 10.0.0.2 ping statistics --- 00:23:59.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.857 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:59.857 11:59:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:23:59.857 00:23:59.857 --- 10.0.0.1 ping statistics --- 00:23:59.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.857 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:23:59.857 11:59:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.857 11:59:50 -- nvmf/common.sh@411 -- # return 0 00:23:59.857 11:59:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:59.857 11:59:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.857 11:59:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:59.857 11:59:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:59.857 11:59:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.857 11:59:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:59.857 11:59:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:59.857 11:59:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:59.857 11:59:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:59.857 11:59:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.857 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.857 11:59:50 -- nvmf/common.sh@470 -- # nvmfpid=2559532 00:23:59.857 11:59:50 -- nvmf/common.sh@471 -- # waitforlisten 2559532 00:23:59.857 11:59:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:59.857 11:59:50 -- common/autotest_common.sh@817 -- # '[' -z 2559532 ']' 00:23:59.857 11:59:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.857 11:59:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:59.857 11:59:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.857 11:59:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:59.857 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.116 [2024-04-18 11:59:50.455416] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:00.116 [2024-04-18 11:59:50.455510] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.116 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.116 [2024-04-18 11:59:50.586830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.375 [2024-04-18 11:59:50.814578] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:00.375 [2024-04-18 11:59:50.814625] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.375 [2024-04-18 11:59:50.814638] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.375 [2024-04-18 11:59:50.814652] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.375 [2024-04-18 11:59:50.814661] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.375 [2024-04-18 11:59:50.814797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.375 [2024-04-18 11:59:50.814864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.375 [2024-04-18 11:59:50.814948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.375 [2024-04-18 11:59:50.814972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:00.944 11:59:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.944 11:59:51 -- common/autotest_common.sh@850 -- # return 0 00:24:00.944 11:59:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.944 11:59:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.944 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.944 11:59:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.944 11:59:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.944 11:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.944 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.944 [2024-04-18 11:59:51.299061] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.944 11:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.944 11:59:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:00.944 11:59:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:00.944 11:59:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:00.944 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.944 11:59:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- 
target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.944 11:59:51 -- target/shutdown.sh@28 -- # cat 00:24:00.944 11:59:51 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:00.944 11:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.944 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.944 Malloc1 00:24:00.944 [2024-04-18 11:59:51.480141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.204 Malloc2 00:24:01.204 Malloc3 00:24:01.463 Malloc4 00:24:01.463 Malloc5 00:24:01.723 Malloc6 00:24:01.723 Malloc7 00:24:01.723 Malloc8 00:24:01.982 Malloc9 00:24:01.982 Malloc10 00:24:01.982 11:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.982 11:59:52 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:01.982 11:59:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:01.982 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.241 11:59:52 -- target/shutdown.sh@125 -- # perfpid=2559930 00:24:02.241 11:59:52 -- target/shutdown.sh@126 -- # waitforlisten 2559930 /var/tmp/bdevperf.sock 00:24:02.241 11:59:52 -- common/autotest_common.sh@817 -- # '[' -z 2559930 ']' 00:24:02.241 11:59:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.241 11:59:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.241 11:59:52 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:02.241 11:59:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
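At this point tc3 has rebuilt the same rig as tc2: ten malloc-backed subsystems behind the namespaced target and a fresh bdevperf instance (perfpid=2559930) being waited on at /var/tmp/bdevperf.sock. One detail visible a few lines up in this test case's nvmfappstart: nvmf_tgt is now launched through three stacked "ip netns exec cvl_0_0_ns_spdk" prefixes, where tc2 used two. That is apparently because nvmf/common.sh@270 runs on every nvmftestinit and prepends NVMF_TARGET_NS_CMD to the persistent NVMF_APP array, so the wrapper accumulates once per test case (harmless, since the inner execs just re-enter the same namespace). A minimal illustration of that accumulation:

    # Why the "ip netns exec" prefix stacks across tc1/tc2/tc3: common.sh@270
    # prepends the namespace wrapper to NVMF_APP on every nvmftestinit.
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=(./build/bin/nvmf_tgt)
    for tc in tc1 tc2 tc3; do
        NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
        echo "$tc: ${NVMF_APP[*]}"
    done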
00:24:02.241 11:59:52 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:02.241 11:59:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.241 11:59:52 -- nvmf/common.sh@521 -- # config=() 00:24:02.241 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.241 11:59:52 -- nvmf/common.sh@521 -- # local subsystem config 00:24:02.241 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.241 { 00:24:02.241 "params": { 00:24:02.241 "name": "Nvme$subsystem", 00:24:02.241 "trtype": "$TEST_TRANSPORT", 00:24:02.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.241 "adrfam": "ipv4", 00:24:02.241 "trsvcid": "$NVMF_PORT", 00:24:02.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.241 "hdgst": ${hdgst:-false}, 00:24:02.241 "ddgst": ${ddgst:-false} 00:24:02.241 }, 00:24:02.241 "method": "bdev_nvme_attach_controller" 00:24:02.241 } 00:24:02.241 EOF 00:24:02.241 )") 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.241 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.241 { 00:24:02.241 "params": { 00:24:02.241 "name": "Nvme$subsystem", 00:24:02.241 "trtype": "$TEST_TRANSPORT", 00:24:02.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.241 "adrfam": "ipv4", 00:24:02.241 "trsvcid": "$NVMF_PORT", 00:24:02.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.241 "hdgst": ${hdgst:-false}, 00:24:02.241 "ddgst": ${ddgst:-false} 00:24:02.241 }, 00:24:02.241 "method": "bdev_nvme_attach_controller" 00:24:02.241 } 00:24:02.241 EOF 00:24:02.241 )") 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.241 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.241 { 00:24:02.241 "params": { 00:24:02.241 "name": "Nvme$subsystem", 00:24:02.241 "trtype": "$TEST_TRANSPORT", 00:24:02.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.241 "adrfam": "ipv4", 00:24:02.241 "trsvcid": "$NVMF_PORT", 00:24:02.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.241 "hdgst": ${hdgst:-false}, 00:24:02.241 "ddgst": ${ddgst:-false} 00:24:02.241 }, 00:24:02.241 "method": "bdev_nvme_attach_controller" 00:24:02.241 } 00:24:02.241 EOF 00:24:02.241 )") 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.241 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.241 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.241 { 00:24:02.241 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:24:02.242 { 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.242 { 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.242 { 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.242 { 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.242 { 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 
00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.242 { 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme$subsystem", 00:24:02.242 "trtype": "$TEST_TRANSPORT", 00:24:02.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "$NVMF_PORT", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.242 "hdgst": ${hdgst:-false}, 00:24:02.242 "ddgst": ${ddgst:-false} 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 } 00:24:02.242 EOF 00:24:02.242 )") 00:24:02.242 [2024-04-18 11:59:52.636272] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:02.242 [2024-04-18 11:59:52.636365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559930 ] 00:24:02.242 11:59:52 -- nvmf/common.sh@543 -- # cat 00:24:02.242 11:59:52 -- nvmf/common.sh@545 -- # jq . 00:24:02.242 11:59:52 -- nvmf/common.sh@546 -- # IFS=, 00:24:02.242 11:59:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme1", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme2", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme3", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme4", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme5", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": 
"bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme6", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme7", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.242 "params": { 00:24:02.242 "name": "Nvme8", 00:24:02.242 "trtype": "tcp", 00:24:02.242 "traddr": "10.0.0.2", 00:24:02.242 "adrfam": "ipv4", 00:24:02.242 "trsvcid": "4420", 00:24:02.242 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:02.242 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:02.242 "hdgst": false, 00:24:02.242 "ddgst": false 00:24:02.242 }, 00:24:02.242 "method": "bdev_nvme_attach_controller" 00:24:02.242 },{ 00:24:02.243 "params": { 00:24:02.243 "name": "Nvme9", 00:24:02.243 "trtype": "tcp", 00:24:02.243 "traddr": "10.0.0.2", 00:24:02.243 "adrfam": "ipv4", 00:24:02.243 "trsvcid": "4420", 00:24:02.243 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:02.243 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:02.243 "hdgst": false, 00:24:02.243 "ddgst": false 00:24:02.243 }, 00:24:02.243 "method": "bdev_nvme_attach_controller" 00:24:02.243 },{ 00:24:02.243 "params": { 00:24:02.243 "name": "Nvme10", 00:24:02.243 "trtype": "tcp", 00:24:02.243 "traddr": "10.0.0.2", 00:24:02.243 "adrfam": "ipv4", 00:24:02.243 "trsvcid": "4420", 00:24:02.243 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:02.243 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:02.243 "hdgst": false, 00:24:02.243 "ddgst": false 00:24:02.243 }, 00:24:02.243 "method": "bdev_nvme_attach_controller" 00:24:02.243 }' 00:24:02.243 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.243 [2024-04-18 11:59:52.762767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.501 [2024-04-18 11:59:52.992667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.414 Running I/O for 10 seconds... 
00:24:04.673 11:59:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:04.673 11:59:55 -- common/autotest_common.sh@850 -- # return 0 00:24:04.673 11:59:55 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:04.673 11:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.673 11:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:04.673 11:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.673 11:59:55 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.673 11:59:55 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:04.673 11:59:55 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:04.673 11:59:55 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:04.673 11:59:55 -- target/shutdown.sh@57 -- # local ret=1 00:24:04.673 11:59:55 -- target/shutdown.sh@58 -- # local i 00:24:04.673 11:59:55 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:04.673 11:59:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:04.673 11:59:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:04.673 11:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.673 11:59:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:04.673 11:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:04.673 11:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.673 11:59:55 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:04.673 11:59:55 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:04.673 11:59:55 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:04.932 11:59:55 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:04.932 11:59:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:04.932 11:59:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:04.932 11:59:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:04.932 11:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.932 11:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:04.932 11:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.205 11:59:55 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:05.205 11:59:55 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:05.205 11:59:55 -- target/shutdown.sh@64 -- # ret=0 00:24:05.205 11:59:55 -- target/shutdown.sh@65 -- # break 00:24:05.205 11:59:55 -- target/shutdown.sh@69 -- # return 0 00:24:05.205 11:59:55 -- target/shutdown.sh@135 -- # killprocess 2559532 00:24:05.205 11:59:55 -- common/autotest_common.sh@936 -- # '[' -z 2559532 ']' 00:24:05.205 11:59:55 -- common/autotest_common.sh@940 -- # kill -0 2559532 00:24:05.205 11:59:55 -- common/autotest_common.sh@941 -- # uname 00:24:05.205 11:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:05.205 11:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2559532 00:24:05.205 11:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:05.205 11:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:05.205 11:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2559532' 00:24:05.205 killing process with pid 2559532 00:24:05.206 11:59:55 -- common/autotest_common.sh@955 -- # kill 2559532 00:24:05.206 11:59:55 -- common/autotest_common.sh@960 -- # wait 2559532 00:24:05.206 [2024-04-18 
11:59:55.546414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(5) to be set (message repeated roughly 60 more times for this tqpair)
00:24:05.206 [2024-04-18 11:59:55.549283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set (message repeated roughly 60 more times for this tqpair)
00:24:05.207 [2024-04-18 11:59:55.552193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set (message repeated roughly 60 more times for this tqpair)
00:24:05.208 [2024-04-18 11:59:55.555734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(5) to be set (message repeated roughly 60 more times for this tqpair)
00:24:05.209 [2024-04-18 11:59:55.558515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(5) to be set (message repeated roughly 60 more times for this tqpair)
00:24:05.209 [2024-04-18
11:59:55.561401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.209 [2024-04-18 11:59:55.561445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.209 [2024-04-18 11:59:55.561469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.209 [2024-04-18 11:59:55.561482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.209 [2024-04-18 11:59:55.561495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.209 [2024-04-18 11:59:55.561507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.209 [2024-04-18 11:59:55.561520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:24:05.210 [2024-04-18 11:59:55.561588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007840 is same with the state(5) to be set 00:24:05.210 [2024-04-18 11:59:55.561722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to be set 00:24:05.210 [2024-04-18 11:59:55.561875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.561962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.561973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000b440 is same with the state(5) to be set 00:24:05.210 [2024-04-18 11:59:55.562006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 
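The NOTICE records above are the host side of the teardown: once the TCP connection to the target goes away, every command still outstanding on the admin and I/O queue pairs (the ASYNC EVENT REQUESTs with cid 0 through 3 on each admin queue, then the queued READ/WRITE commands further down) is completed locally with the generic status ABORTED - SQ DELETION (sct 00, sc 08), while nvme_tcp.c and tcp.c keep logging that a qpair recv state is being set to the value it already has. A minimal sketch of how an application-level completion callback could recognize that status and count the I/O as retryable, assuming the standard SPDK host header with its spdk_nvme_cpl_is_error() and SPDK_NVME_SC_ABORTED_SQ_DELETION definitions (the io_tracker bookkeeping struct below is hypothetical, not part of SPDK or of this test):

#include "spdk/nvme.h"

/* Hypothetical per-run bookkeeping, only for this sketch. */
struct io_tracker {
	uint64_t completed;
	uint64_t aborted_sq_deletion;
};

/* Completion callback of the form passed to spdk_nvme_ns_cmd_read()/_write(). */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_tracker *t = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* "ABORTED - SQ DELETION (00/08)": the queue pair was deleted
		 * underneath the command because the connection dropped, so the
		 * I/O never reached the namespace and can be retried on a
		 * freshly connected qpair. */
		t->aborted_sq_deletion++;
		return;
	}
	t->completed++;
}
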
[2024-04-18 11:59:55.562079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:24:05.210 [2024-04-18 11:59:55.562150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.210 [2024-04-18 11:59:55.562237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000016840 is same with the state(5) to be set 00:24:05.210 [2024-04-18 11:59:55.562859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.210 [2024-04-18 11:59:55.562890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.210 [2024-04-18 11:59:55.562931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.210 [2024-04-18 11:59:55.562959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.562973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.210 [2024-04-18 11:59:55.562985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.210 [2024-04-18 11:59:55.563000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.563987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.563999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.211 [2024-04-18 11:59:55.564012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.211 [2024-04-18 11:59:55.564024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.212 [2024-04-18 11:59:55.564553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.212 [2024-04-18 11:59:55.564564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564595] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564605] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:05.212 [2024-04-18 11:59:55.564615] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.212 [2024-04-18 11:59:55.564790] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564800] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564820] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564902] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001b640 was disconnected and freed. reset controller. 00:24:05.213 [2024-04-18 11:59:55.564906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.564998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.565008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.565018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.565028] 
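Two records in the stretch above are the turning point of this iteration: spdk_nvme_qpair_process_completions() reports "CQ transport error -6 (No such device or address) on qpair id 1" once the socket is gone, and bdev_nvme's disconnected-qpair callback then logs "qpair 0x61400001b640 was disconnected and freed. reset controller." before the target-side nvmf_tcp_qpair_set_recv_state noise continues. A rough sketch of the same react-to-transport-error pattern at the application level (this is not the bdev_nvme code path; it assumes spdk_nvme_ctrlr_reconnect_io_qpair() is available in the SPDK version under test):

#include "spdk/nvme.h"

/* Poll one I/O qpair; on a transport error such as the -6 (ENXIO) above,
 * reset the controller and reconnect the qpair before resuming I/O. */
static int32_t
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc >= 0) {
		return rc;	/* number of completions reaped */
	}

	/* rc is a negative errno: the qpair is no longer usable. */
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return rc;
	}
	return spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
}

In the test itself that decision is made by the bdev_nvme module, which is why the log shows the reset being triggered from bdev_nvme_disconnected_qpair_cb rather than from application code.
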
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.565038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.565049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:24:05.213 [2024-04-18 11:59:55.567081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.213 [2024-04-18 11:59:55.567784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.213 [2024-04-18 11:59:55.567798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.567983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.567996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 
[2024-04-18 11:59:55.568243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568382] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568417] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.214 [2024-04-18 11:59:55.568438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.214 [2024-04-18 11:59:55.568444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.214 [2024-04-18 11:59:55.568448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the 
state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568522] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568620] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568745] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the 
state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.568755] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.568766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.568843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:24:05.215 [2024-04-18 11:59:55.569065] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001bc40 was disconnected and freed. reset controller. 
00:24:05.215 [2024-04-18 11:59:55.569199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.569216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.569232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.569244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.569258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.569269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.215 [2024-04-18 11:59:55.569283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.215 [2024-04-18 11:59:55.569298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 
[2024-04-18 11:59:55.569476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 
11:59:55.569729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.569977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.569993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 
11:59:55.570007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570262] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.216 [2024-04-18 11:59:55.570336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.216 [2024-04-18 11:59:55.570348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.570881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.217 [2024-04-18 11:59:55.570893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.571170] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001c840 was disconnected and freed. reset controller. 00:24:05.217 [2024-04-18 11:59:55.571301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:05.217 [2024-04-18 11:59:55.571351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:05.217 [2024-04-18 11:59:55.573640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:05.217 [2024-04-18 11:59:55.573679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000012c40 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007840 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.573815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.573830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.573842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.573855] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.573867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.573880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.573892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.573907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010e40 is same with the state(5) to be set 00:24:05.217 [2024-04-18 11:59:55.573931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000b440 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.573974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000016840 (9): Bad file descriptor 00:24:05.217 [2024-04-18 11:59:55.574013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.574027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.574040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.574052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.574064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.574077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.574089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.217 [2024-04-18 11:59:55.574101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.217 [2024-04-18 11:59:55.574112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000014a40 is same with the state(5) to be set 00:24:05.217 [2024-04-18 11:59:55.574938] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.217 [2024-04-18 11:59:55.575217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.217 [2024-04-18 11:59:55.575461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.217 [2024-04-18 11:59:55.575479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000d240 with addr=10.0.0.2, port=4420 00:24:05.217 [2024-04-18 11:59:55.575494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to 
be set 00:24:05.217 [2024-04-18 11:59:55.575571] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.217 [2024-04-18 11:59:55.575627] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.217 [2024-04-18 11:59:55.575679] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.217 [2024-04-18 11:59:55.575732] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.217 [2024-04-18 11:59:55.577486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.217 [2024-04-18 11:59:55.577729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.217 [2024-04-18 11:59:55.577746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000f040 with addr=10.0.0.2, port=4420 00:24:05.217 [2024-04-18 11:59:55.577760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000f040 is same with the state(5) to be set 00:24:05.217 [2024-04-18 11:59:55.578111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.218 [2024-04-18 11:59:55.578459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.218 [2024-04-18 11:59:55.578473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000012c40 with addr=10.0.0.2, port=4420 00:24:05.218 [2024-04-18 11:59:55.578485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000012c40 is same with the state(5) to be set 00:24:05.218 [2024-04-18 11:59:55.578505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:24:05.218 [2024-04-18 11:59:55.578651] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.218 [2024-04-18 11:59:55.578721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.578739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.578764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.578776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.578790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001ce40 is same with the state(5) to be set 00:24:05.218 [2024-04-18 11:59:55.579041] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001ce40 was disconnected and freed. reset controller. 
00:24:05.218 [2024-04-18 11:59:55.579077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:24:05.218 [2024-04-18 11:59:55.579093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000012c40 (9): Bad file descriptor 00:24:05.218 [2024-04-18 11:59:55.579106] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:05.218 [2024-04-18 11:59:55.579118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:05.218 [2024-04-18 11:59:55.579131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:05.218 [2024-04-18 11:59:55.579913] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.218 [2024-04-18 11:59:55.579931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:05.218 [2024-04-18 11:59:55.579950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000014a40 (9): Bad file descriptor 00:24:05.218 [2024-04-18 11:59:55.579965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:05.218 [2024-04-18 11:59:55.579976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:05.218 [2024-04-18 11:59:55.579988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:05.218 [2024-04-18 11:59:55.580005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:05.218 [2024-04-18 11:59:55.580016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:05.218 [2024-04-18 11:59:55.580026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:05.218 [2024-04-18 11:59:55.580093] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.218 [2024-04-18 11:59:55.580105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:05.218 [2024-04-18 11:59:55.580846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.218 [2024-04-18 11:59:55.581136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.218 [2024-04-18 11:59:55.581153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000014a40 with addr=10.0.0.2, port=4420 00:24:05.218 [2024-04-18 11:59:55.581166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000014a40 is same with the state(5) to be set 00:24:05.218 [2024-04-18 11:59:55.581242] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000014a40 (9): Bad file descriptor 00:24:05.218 [2024-04-18 11:59:55.581317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:05.218 [2024-04-18 11:59:55.581331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:05.218 [2024-04-18 11:59:55.581343] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:05.218 [2024-04-18 11:59:55.581399] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.218 [2024-04-18 11:59:55.583634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000010e40 (9): Bad file descriptor 00:24:05.218 [2024-04-18 11:59:55.583799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.583845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.583873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.583900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.583926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.583953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.583979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.583991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.218 [2024-04-18 11:59:55.584383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.218 [2024-04-18 11:59:55.584396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
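The "(00/08)" pair that spdk_nvme_print_completion keeps printing is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, "Command Aborted due to SQ Deletion": the queued reads were still outstanding when the I/O submission queue was torn down during the reset, so they complete with this abort status instead of returning data. A stand-alone decoder sketch (constants spelled out here for illustration; SPDK's nvme_spec.h carries its own equivalent definitions):

    /* Illustrative decoder for the "(sct/sc)" pair in the log, not SPDK source. */
    #include <stdint.h>
    #include <stdio.h>

    #define SCT_GENERIC            0x0   /* Generic Command Status */
    #define SC_ABORTED_SQ_DELETION 0x08  /* Command Aborted due to SQ Deletion */

    static const char *describe_status(uint8_t sct, uint8_t sc)
    {
        if (sct == SCT_GENERIC && sc == 0x00)
            return "SUCCESS";
        if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
            return "ABORTED - SQ DELETION (I/O can be retried after reconnect)";
        return "other status";
    }

    int main(void)
    {
        /* (00/08) as printed above */
        printf("%s\n", describe_status(0x0, 0x08));
        return 0;
    }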
00:24:05.219 [2024-04-18 11:59:55.584512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 
11:59:55.584766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.584978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.584991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.219 [2024-04-18 11:59:55.585242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.219 [2024-04-18 11:59:55.585254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.585458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.585471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000019e40 is same with the state(5) to be set 00:24:05.220 [2024-04-18 11:59:55.586768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.586975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.586988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.220 [2024-04-18 11:59:55.587563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.220 [2024-04-18 11:59:55.587575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.587987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.587998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.221 [2024-04-18 11:59:55.588111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 
11:59:55.588363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.588401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.588413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001a440 is same with the state(5) to be set 00:24:05.221 [2024-04-18 11:59:55.589679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.589699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.589716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.589728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.589741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.589753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.589768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.589779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.589793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.589804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.589818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.221 [2024-04-18 11:59:55.589830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.221 [2024-04-18 11:59:55.589844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.589855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.589872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.589884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.589898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.589909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.589923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.589935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.589948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.589960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.589974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.589985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.589998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.222 [2024-04-18 11:59:55.590859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.222 [2024-04-18 11:59:55.590870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.590883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.590895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.590909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.590921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.590934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.590945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.590958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.590970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.590984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.590995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.591322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.591334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001aa40 is same with the state(5) to be set 00:24:05.223 [2024-04-18 11:59:55.592628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.592973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.592984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.593016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.593042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.593066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.593093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.593119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.223 [2024-04-18 11:59:55.593143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.223 [2024-04-18 11:59:55.593157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.224 [2024-04-18 11:59:55.593752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 11:59:55.593977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.224 [2024-04-18 11:59:55.593990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.224 [2024-04-18 
11:59:55.594001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.594253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.594265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001b040 is same with the state(5) to be set 00:24:05.225 [2024-04-18 11:59:55.595558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595805] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.595973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.595988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.225 [2024-04-18 11:59:55.596299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.225 [2024-04-18 11:59:55.596312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.226 [2024-04-18 11:59:55.596834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.596977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.596989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 
11:59:55.597088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.226 [2024-04-18 11:59:55.597186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.226 [2024-04-18 11:59:55.597198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001d440 is same with the state(5) to be set 00:24:05.226 [2024-04-18 11:59:55.602357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.226 [2024-04-18 11:59:55.602384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:05.226 [2024-04-18 11:59:55.602398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:05.226 [2024-04-18 11:59:55.602418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:05.226 [2024-04-18 11:59:55.602714] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:05.226 [2024-04-18 11:59:55.602809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:05.226 [2024-04-18 11:59:55.603226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.226 [2024-04-18 11:59:55.603576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.226 [2024-04-18 11:59:55.603592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:24:05.226 [2024-04-18 11:59:55.603605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:24:05.226 [2024-04-18 11:59:55.603862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.226 [2024-04-18 11:59:55.604037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.227 [2024-04-18 11:59:55.604050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007840 with addr=10.0.0.2, port=4420 00:24:05.227 [2024-04-18 11:59:55.604062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007840 is same with the state(5) to be set 00:24:05.227 [2024-04-18 11:59:55.604277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.227 [2024-04-18 11:59:55.604527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.227 [2024-04-18 11:59:55.604542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000009640 with addr=10.0.0.2, port=4420 00:24:05.227 [2024-04-18 11:59:55.604553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:24:05.227 [2024-04-18 11:59:55.604828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.227 [2024-04-18 11:59:55.605079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.227 [2024-04-18 11:59:55.605094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000b440 with addr=10.0.0.2, port=4420 00:24:05.227 [2024-04-18 11:59:55.605105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000b440 is same with the state(5) to be set 00:24:05.227 [2024-04-18 11:59:55.606625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.606986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.606997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.227 [2024-04-18 11:59:55.607529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.227 [2024-04-18 11:59:55.607542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.607979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.607992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.228 [2024-04-18 11:59:55.608251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.228 [2024-04-18 11:59:55.608263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001c240 is same with the state(5) to be set 00:24:05.228 [2024-04-18 11:59:55.613588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:05.228 [2024-04-18 11:59:55.613617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:05.228 [2024-04-18 11:59:55.613631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:05.228 [2024-04-18 11:59:55.613653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:05.228 task offset: 22272 on job bdev=Nvme5n1 fails 00:24:05.228 00:24:05.228 Latency(us) 00:24:05.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.228 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.228 Job: Nvme1n1 ended in about 0.69 seconds with error 00:24:05.228 Verification LBA range: start 0x0 length 0x400 00:24:05.228 Nvme1n1 : 0.69 184.56 11.54 92.28 0.00 228211.92 21705.52 205520.90 00:24:05.228 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.228 Job: Nvme2n1 ended in about 0.70 seconds with error 00:24:05.228 Verification LBA range: start 0x0 length 0x400 00:24:05.228 Nvme2n1 : 0.70 183.78 11.49 91.89 0.00 223755.74 37329.31 191260.26 00:24:05.228 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.228 Job: Nvme3n1 ended in about 0.70 seconds with error 00:24:05.228 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme3n1 : 0.70 183.02 11.44 91.51 0.00 219369.20 18874.37 231525.58 00:24:05.229 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme4n1 ended in about 0.70 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme4n1 : 0.70 189.38 11.84 91.13 0.00 209465.83 19293.80 228170.14 00:24:05.229 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme5n1 ended in about 0.67 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme5n1 : 0.67 189.86 11.87 94.93 0.00 200136.70 3591.37 212231.78 00:24:05.229 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme6n1 ended in about 0.68 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme6n1 : 0.68 188.34 11.77 94.17 0.00 196619.67 6501.17 239914.19 00:24:05.229 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme7n1 ended in about 0.72 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme7n1 : 0.72 178.70 11.17 89.35 0.00 203425.93 38377.88 192099.12 00:24:05.229 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme8n1 ended in about 0.68 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme8n1 : 0.68 188.07 11.75 94.03 0.00 186140.26 7025.46 
238236.47 00:24:05.229 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme9n1 ended in about 0.69 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme9n1 : 0.69 183.40 11.46 2.91 0.00 273501.39 39426.46 239914.19 00:24:05.229 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:05.229 Job: Nvme10n1 ended in about 0.71 seconds with error 00:24:05.229 Verification LBA range: start 0x0 length 0x400 00:24:05.229 Nvme10n1 : 0.71 90.75 5.67 90.75 0.00 275696.84 21390.95 258369.13 00:24:05.229 =================================================================================================================== 00:24:05.229 Total : 1759.86 109.99 832.96 0.00 217825.68 3591.37 258369.13 00:24:05.229 [2024-04-18 11:59:55.700214] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:05.229 [2024-04-18 11:59:55.700269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:05.229 [2024-04-18 11:59:55.700745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.701033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.701050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000016840 with addr=10.0.0.2, port=4420 00:24:05.229 [2024-04-18 11:59:55.701068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000016840 is same with the state(5) to be set 00:24:05.229 [2024-04-18 11:59:55.701089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.701106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007840 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.701121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.701139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000b440 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.701183] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.701199] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.701215] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.701228] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
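(Editor's note, not part of the captured trace: the bdevperf summary above reports per-device IOPS and MiB/s for the 64 KiB verify workload, "IO size: 65536". As a quick sanity check — a sketch only, not something the test runs — throughput in MiB/s should equal IOPS times the I/O size, and the table rows are consistent with that, e.g. the Total row: 1759.86 IOPS x 65536 B is about 109.99 MiB/s.)

    # hypothetical helper, not part of the SPDK test scripts:
    # recompute MiB/s from IOPS for two rows of the summary above (IO size 65536 bytes)
    awk 'BEGIN {
        iosize = 65536                                                     # bytes per I/O
        printf "Nvme1n1: %.2f MiB/s\n", 184.56  * iosize / (1024 * 1024)   # table shows 11.54
        printf "Total:   %.2f MiB/s\n", 1759.86 * iosize / (1024 * 1024)   # table shows 109.99
    }'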
00:24:05.229 [2024-04-18 11:59:55.701243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000016840 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.701653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.702007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.702023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000d240 with addr=10.0.0.2, port=4420 00:24:05.229 [2024-04-18 11:59:55.702036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to be set 00:24:05.229 [2024-04-18 11:59:55.702396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.702678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.702693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000012c40 with addr=10.0.0.2, port=4420 00:24:05.229 [2024-04-18 11:59:55.702705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000012c40 is same with the state(5) to be set 00:24:05.229 [2024-04-18 11:59:55.702976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.703320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.703334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000f040 with addr=10.0.0.2, port=4420 00:24:05.229 [2024-04-18 11:59:55.703346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000f040 is same with the state(5) to be set 00:24:05.229 [2024-04-18 11:59:55.703565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.703836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.703851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000014a40 with addr=10.0.0.2, port=4420 00:24:05.229 [2024-04-18 11:59:55.703862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000014a40 is same with the state(5) to be set 00:24:05.229 [2024-04-18 11:59:55.704207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.704549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.229 [2024-04-18 11:59:55.704564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010e40 with addr=10.0.0.2, port=4420 00:24:05.229 [2024-04-18 11:59:55.704576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010e40 is same with the state(5) to be set 00:24:05.229 [2024-04-18 11:59:55.704589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.704601] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:05.229 [2024-04-18 11:59:55.704614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:05.229 [2024-04-18 11:59:55.704633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.704644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:05.229 [2024-04-18 11:59:55.704658] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:05.229 [2024-04-18 11:59:55.704673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.704683] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:05.229 [2024-04-18 11:59:55.704693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:05.229 [2024-04-18 11:59:55.704709] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.704718] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:05.229 [2024-04-18 11:59:55.704729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:05.229 [2024-04-18 11:59:55.704757] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.704773] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.704788] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.704802] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.704816] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.229 [2024-04-18 11:59:55.705292] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.229 [2024-04-18 11:59:55.705311] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.229 [2024-04-18 11:59:55.705320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.229 [2024-04-18 11:59:55.705330] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:05.229 [2024-04-18 11:59:55.705344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.705361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000012c40 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.705375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.705389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000014a40 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.705403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000010e40 (9): Bad file descriptor 00:24:05.229 [2024-04-18 11:59:55.705415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.705426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:05.229 [2024-04-18 11:59:55.705438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:05.229 [2024-04-18 11:59:55.705513] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.229 [2024-04-18 11:59:55.705526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.705536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:05.229 [2024-04-18 11:59:55.705547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:05.229 [2024-04-18 11:59:55.705561] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:05.229 [2024-04-18 11:59:55.705572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:05.230 [2024-04-18 11:59:55.705585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:05.230 [2024-04-18 11:59:55.705599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:05.230 [2024-04-18 11:59:55.705610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:05.230 [2024-04-18 11:59:55.705620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:05.230 [2024-04-18 11:59:55.705633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:05.230 [2024-04-18 11:59:55.705644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:05.230 [2024-04-18 11:59:55.705656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:24:05.230 [2024-04-18 11:59:55.705670] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:05.230 [2024-04-18 11:59:55.705680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:05.230 [2024-04-18 11:59:55.705691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:05.230 [2024-04-18 11:59:55.705748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.230 [2024-04-18 11:59:55.705761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.230 [2024-04-18 11:59:55.705770] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.230 [2024-04-18 11:59:55.705780] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.230 [2024-04-18 11:59:55.705789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.521 11:59:58 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:08.521 11:59:58 -- target/shutdown.sh@139 -- # sleep 1 00:24:09.459 11:59:59 -- target/shutdown.sh@142 -- # kill -9 2559930 00:24:09.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2559930) - No such process 00:24:09.459 11:59:59 -- target/shutdown.sh@142 -- # true 00:24:09.459 11:59:59 -- target/shutdown.sh@144 -- # stoptarget 00:24:09.459 11:59:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:09.459 11:59:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:09.459 11:59:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:09.459 11:59:59 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:09.459 11:59:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:09.459 11:59:59 -- nvmf/common.sh@117 -- # sync 00:24:09.459 11:59:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.459 11:59:59 -- nvmf/common.sh@120 -- # set +e 00:24:09.459 11:59:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.459 11:59:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.459 rmmod nvme_tcp 00:24:09.459 rmmod nvme_fabrics 00:24:09.459 rmmod nvme_keyring 00:24:09.459 11:59:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.459 11:59:59 -- nvmf/common.sh@124 -- # set -e 00:24:09.459 11:59:59 -- nvmf/common.sh@125 -- # return 0 00:24:09.459 11:59:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:09.459 11:59:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:09.459 11:59:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:09.459 11:59:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:09.459 11:59:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.459 11:59:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.459 11:59:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.459 11:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.459 11:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.992 12:00:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.992 00:24:11.992 real 0m11.957s 00:24:11.992 user 0m33.958s 00:24:11.992 sys 0m1.958s 00:24:11.992 12:00:01 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:24:11.992 12:00:01 -- common/autotest_common.sh@10 -- # set +x 00:24:11.992 ************************************ 00:24:11.992 END TEST nvmf_shutdown_tc3 00:24:11.992 ************************************ 00:24:11.992 12:00:01 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:11.992 00:24:11.992 real 0m48.285s 00:24:11.992 user 2m16.259s 00:24:11.992 sys 0m11.687s 00:24:11.992 12:00:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:11.992 12:00:01 -- common/autotest_common.sh@10 -- # set +x 00:24:11.992 ************************************ 00:24:11.992 END TEST nvmf_shutdown 00:24:11.992 ************************************ 00:24:11.992 12:00:02 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:24:11.992 12:00:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:11.992 12:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:11.992 12:00:02 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:24:11.992 12:00:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:11.992 12:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:11.992 12:00:02 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:24:11.992 12:00:02 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:11.993 12:00:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:11.993 12:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:11.993 12:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:11.993 ************************************ 00:24:11.993 START TEST nvmf_multicontroller 00:24:11.993 ************************************ 00:24:11.993 12:00:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:11.993 * Looking for test storage... 
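(Editor's note, not part of the captured trace: multicontroller.sh configures the NVMe-oF TCP target through SPDK's JSON-RPC interface; further down in this trace it creates the TCP transport, a 64 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with listeners on ports 4420 and 4421. A minimal stand-alone sketch of that same target-side setup using scripts/rpc.py is shown below; it assumes an nvmf_tgt process is already running and listening on the default /var/tmp/spdk.sock, and uses the same arguments that appear as rpc_cmd calls later in this log.)

    # sketch only: equivalent rpc.py calls for the target configuration performed by the test
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421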
00:24:11.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.993 12:00:02 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.993 12:00:02 -- nvmf/common.sh@7 -- # uname -s 00:24:11.993 12:00:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.993 12:00:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.993 12:00:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.993 12:00:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.993 12:00:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.993 12:00:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.993 12:00:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.993 12:00:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.993 12:00:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.993 12:00:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.993 12:00:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:11.993 12:00:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:11.993 12:00:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.993 12:00:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.993 12:00:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.993 12:00:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.993 12:00:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.993 12:00:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.993 12:00:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.993 12:00:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.993 12:00:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.993 12:00:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.993 12:00:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.993 12:00:02 -- paths/export.sh@5 -- # export PATH 00:24:11.993 12:00:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.993 12:00:02 -- nvmf/common.sh@47 -- # : 0 00:24:11.993 12:00:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.993 12:00:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.993 12:00:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.993 12:00:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.993 12:00:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.993 12:00:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.993 12:00:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.993 12:00:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.993 12:00:02 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.993 12:00:02 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.993 12:00:02 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:11.993 12:00:02 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:11.993 12:00:02 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.993 12:00:02 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:11.993 12:00:02 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:11.993 12:00:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:11.993 12:00:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.993 12:00:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:11.993 12:00:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:11.993 12:00:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:11.993 12:00:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.993 12:00:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.993 12:00:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.993 12:00:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:11.993 12:00:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:11.993 12:00:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.993 12:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:18.585 12:00:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:18.585 12:00:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.585 12:00:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.585 12:00:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.585 
12:00:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.585 12:00:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.585 12:00:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.585 12:00:08 -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.585 12:00:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.585 12:00:08 -- nvmf/common.sh@296 -- # e810=() 00:24:18.585 12:00:08 -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.585 12:00:08 -- nvmf/common.sh@297 -- # x722=() 00:24:18.585 12:00:08 -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.585 12:00:08 -- nvmf/common.sh@298 -- # mlx=() 00:24:18.585 12:00:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.585 12:00:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.585 12:00:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.585 12:00:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.585 12:00:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.585 12:00:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.585 12:00:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:18.585 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:18.585 12:00:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.585 12:00:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:18.585 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:18.585 12:00:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.585 12:00:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:18.585 12:00:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.585 12:00:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:18.585 12:00:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.585 12:00:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:18.585 Found net devices under 0000:af:00.0: cvl_0_0 00:24:18.585 12:00:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.585 12:00:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.585 12:00:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.585 12:00:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:18.585 12:00:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.585 12:00:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:18.585 Found net devices under 0000:af:00.1: cvl_0_1 00:24:18.585 12:00:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.585 12:00:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:18.585 12:00:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:18.585 12:00:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:18.585 12:00:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:18.585 12:00:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.585 12:00:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.585 12:00:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.585 12:00:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.585 12:00:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.585 12:00:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.585 12:00:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.585 12:00:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.585 12:00:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.585 12:00:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.585 12:00:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.585 12:00:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.585 12:00:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.585 12:00:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.585 12:00:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.585 12:00:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.585 12:00:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.585 12:00:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.585 12:00:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.585 12:00:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:18.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:24:18.585 00:24:18.585 --- 10.0.0.2 ping statistics --- 00:24:18.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.585 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:18.585 12:00:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:18.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:24:18.585 00:24:18.585 --- 10.0.0.1 ping statistics --- 00:24:18.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.585 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:24:18.586 12:00:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.586 12:00:08 -- nvmf/common.sh@411 -- # return 0 00:24:18.586 12:00:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:18.586 12:00:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.586 12:00:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:18.586 12:00:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:18.586 12:00:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.586 12:00:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:18.586 12:00:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:18.586 12:00:08 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:18.586 12:00:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:18.586 12:00:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:18.586 12:00:08 -- common/autotest_common.sh@10 -- # set +x 00:24:18.586 12:00:08 -- nvmf/common.sh@470 -- # nvmfpid=2565315 00:24:18.586 12:00:08 -- nvmf/common.sh@471 -- # waitforlisten 2565315 00:24:18.586 12:00:08 -- common/autotest_common.sh@817 -- # '[' -z 2565315 ']' 00:24:18.586 12:00:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.586 12:00:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:18.586 12:00:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.586 12:00:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:18.586 12:00:08 -- common/autotest_common.sh@10 -- # set +x 00:24:18.586 12:00:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:18.586 [2024-04-18 12:00:08.781110] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:18.586 [2024-04-18 12:00:08.781199] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.586 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.586 [2024-04-18 12:00:08.909159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:18.586 [2024-04-18 12:00:09.111385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.586 [2024-04-18 12:00:09.111426] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:18.586 [2024-04-18 12:00:09.111439] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.586 [2024-04-18 12:00:09.111456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.586 [2024-04-18 12:00:09.111468] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.586 [2024-04-18 12:00:09.111529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.586 [2024-04-18 12:00:09.111594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.586 [2024-04-18 12:00:09.111600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.153 12:00:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:19.153 12:00:09 -- common/autotest_common.sh@850 -- # return 0 00:24:19.153 12:00:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:19.153 12:00:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:19.153 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 12:00:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.153 12:00:09 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.153 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.153 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 [2024-04-18 12:00:09.599648] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.153 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.153 12:00:09 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.153 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.153 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 Malloc0 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 [2024-04-18 12:00:09.728302] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 [2024-04-18 12:00:09.736239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4421 *** 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 Malloc1 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:19.411 12:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.411 12:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.411 12:00:09 -- host/multicontroller.sh@44 -- # bdevperf_pid=2565594 00:24:19.411 12:00:09 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:19.411 12:00:09 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.411 12:00:09 -- host/multicontroller.sh@47 -- # waitforlisten 2565594 /var/tmp/bdevperf.sock 00:24:19.411 12:00:09 -- common/autotest_common.sh@817 -- # '[' -z 2565594 ']' 00:24:19.411 12:00:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.411 12:00:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:19.411 12:00:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:19.411 12:00:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:19.411 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:24:20.352 12:00:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.352 12:00:10 -- common/autotest_common.sh@850 -- # return 0 00:24:20.352 12:00:10 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:20.352 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.352 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.352 NVMe0n1 00:24:20.352 12:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.352 12:00:10 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.352 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.352 12:00:10 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:20.352 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.352 12:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.352 1 00:24:20.352 12:00:10 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.352 12:00:10 -- common/autotest_common.sh@638 -- # local es=0 00:24:20.352 12:00:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.352 12:00:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.615 12:00:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.615 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.615 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.615 request: 00:24:20.615 { 00:24:20.615 "name": "NVMe0", 00:24:20.615 "trtype": "tcp", 00:24:20.615 "traddr": "10.0.0.2", 00:24:20.615 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:20.615 "hostaddr": "10.0.0.2", 00:24:20.615 "hostsvcid": "60000", 00:24:20.615 "adrfam": "ipv4", 00:24:20.615 "trsvcid": "4420", 00:24:20.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.615 "method": "bdev_nvme_attach_controller", 00:24:20.615 "req_id": 1 00:24:20.615 } 00:24:20.615 Got JSON-RPC error response 00:24:20.615 response: 00:24:20.615 { 00:24:20.615 "code": -114, 00:24:20.615 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:20.615 } 00:24:20.615 12:00:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:20.615 12:00:10 -- common/autotest_common.sh@641 -- # es=1 00:24:20.615 12:00:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:20.615 12:00:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:20.615 12:00:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:20.615 12:00:10 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:20.615 12:00:10 -- common/autotest_common.sh@638 -- # local es=0 00:24:20.615 12:00:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:20.615 12:00:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.615 12:00:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:20.615 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.615 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.615 request: 00:24:20.615 { 00:24:20.615 "name": "NVMe0", 00:24:20.615 "trtype": "tcp", 00:24:20.615 "traddr": "10.0.0.2", 00:24:20.615 "hostaddr": "10.0.0.2", 00:24:20.615 "hostsvcid": "60000", 00:24:20.615 "adrfam": "ipv4", 00:24:20.615 "trsvcid": "4420", 00:24:20.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:20.615 "method": "bdev_nvme_attach_controller", 00:24:20.615 "req_id": 1 00:24:20.615 } 00:24:20.615 Got JSON-RPC error response 00:24:20.615 response: 00:24:20.615 { 00:24:20.615 "code": -114, 00:24:20.615 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:20.615 } 00:24:20.615 12:00:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:20.615 12:00:10 -- common/autotest_common.sh@641 -- # es=1 00:24:20.615 12:00:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:20.615 12:00:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:20.615 12:00:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:20.615 12:00:10 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:20.615 12:00:10 -- common/autotest_common.sh@638 -- # local es=0 00:24:20.615 12:00:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:20.615 12:00:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:20.615 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.615 12:00:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:20.615 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.615 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.615 request: 00:24:20.615 { 00:24:20.615 "name": "NVMe0", 00:24:20.615 "trtype": "tcp", 00:24:20.616 "traddr": "10.0.0.2", 00:24:20.616 "hostaddr": 
"10.0.0.2", 00:24:20.616 "hostsvcid": "60000", 00:24:20.616 "adrfam": "ipv4", 00:24:20.616 "trsvcid": "4420", 00:24:20.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.616 "multipath": "disable", 00:24:20.616 "method": "bdev_nvme_attach_controller", 00:24:20.616 "req_id": 1 00:24:20.616 } 00:24:20.616 Got JSON-RPC error response 00:24:20.616 response: 00:24:20.616 { 00:24:20.616 "code": -114, 00:24:20.616 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:20.616 } 00:24:20.616 12:00:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:20.616 12:00:10 -- common/autotest_common.sh@641 -- # es=1 00:24:20.616 12:00:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:20.616 12:00:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:20.616 12:00:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:20.616 12:00:10 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:20.616 12:00:10 -- common/autotest_common.sh@638 -- # local es=0 00:24:20.616 12:00:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:20.616 12:00:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:20.616 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.616 12:00:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:20.616 12:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.616 12:00:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:20.616 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.616 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.616 request: 00:24:20.616 { 00:24:20.616 "name": "NVMe0", 00:24:20.616 "trtype": "tcp", 00:24:20.616 "traddr": "10.0.0.2", 00:24:20.616 "hostaddr": "10.0.0.2", 00:24:20.616 "hostsvcid": "60000", 00:24:20.616 "adrfam": "ipv4", 00:24:20.616 "trsvcid": "4420", 00:24:20.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.616 "multipath": "failover", 00:24:20.616 "method": "bdev_nvme_attach_controller", 00:24:20.616 "req_id": 1 00:24:20.616 } 00:24:20.616 Got JSON-RPC error response 00:24:20.616 response: 00:24:20.616 { 00:24:20.616 "code": -114, 00:24:20.616 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:20.616 } 00:24:20.616 12:00:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:20.616 12:00:10 -- common/autotest_common.sh@641 -- # es=1 00:24:20.616 12:00:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:20.616 12:00:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:20.616 12:00:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:20.616 12:00:10 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.616 12:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.616 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.616 00:24:20.616 12:00:11 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:24:20.616 12:00:11 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.616 12:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.616 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:24:20.616 12:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.616 12:00:11 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:20.616 12:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.616 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:24:20.874 00:24:20.874 12:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.874 12:00:11 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.874 12:00:11 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:20.874 12:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.874 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:24:20.874 12:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.874 12:00:11 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:20.874 12:00:11 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:22.252 0 00:24:22.252 12:00:12 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:22.252 12:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.252 12:00:12 -- common/autotest_common.sh@10 -- # set +x 00:24:22.252 12:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.252 12:00:12 -- host/multicontroller.sh@100 -- # killprocess 2565594 00:24:22.252 12:00:12 -- common/autotest_common.sh@936 -- # '[' -z 2565594 ']' 00:24:22.252 12:00:12 -- common/autotest_common.sh@940 -- # kill -0 2565594 00:24:22.252 12:00:12 -- common/autotest_common.sh@941 -- # uname 00:24:22.252 12:00:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.252 12:00:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2565594 00:24:22.252 12:00:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:22.252 12:00:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:22.252 12:00:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2565594' 00:24:22.252 killing process with pid 2565594 00:24:22.252 12:00:12 -- common/autotest_common.sh@955 -- # kill 2565594 00:24:22.252 12:00:12 -- common/autotest_common.sh@960 -- # wait 2565594 00:24:23.186 12:00:13 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.186 12:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.186 12:00:13 -- common/autotest_common.sh@10 -- # set +x 00:24:23.186 12:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.186 12:00:13 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:23.186 12:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.186 12:00:13 -- common/autotest_common.sh@10 -- # set +x 00:24:23.186 12:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.186 12:00:13 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
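In outline, the multicontroller flow traced above reduces to the rpc.py sequence below. This is a minimal sketch, not part of the captured log: it assumes the target's RPC socket is the default /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, bdevperf listens on /var/tmp/bdevperf.sock, and scripts/rpc.py is resolved from the SPDK checkout; only cnode1 is shown (cnode2 is set up the same way with Malloc1).

# Target side: TCP transport, a malloc-backed subsystem, listeners on ports 4420 and 4421
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# bdevperf side: the first attach creates NVMe0; re-attaching the same name with a
# different host NQN, a different subsystem, or multipath disable/failover is expected
# to fail with -114, which is exactly what the JSON-RPC error responses above show.
# Attaching on port 4421 adds a second path, and NVMe1 is a separate controller that
# the perform_tests run exercises before it is detached.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1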
00:24:23.186 12:00:13 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:23.186 12:00:13 -- common/autotest_common.sh@1598 -- # read -r file 00:24:23.186 12:00:13 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:23.186 12:00:13 -- common/autotest_common.sh@1597 -- # sort -u 00:24:23.186 12:00:13 -- common/autotest_common.sh@1599 -- # cat 00:24:23.186 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:23.186 [2024-04-18 12:00:09.936891] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:23.186 [2024-04-18 12:00:09.936989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565594 ] 00:24:23.186 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.186 [2024-04-18 12:00:10.063968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.186 [2024-04-18 12:00:10.291868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.186 [2024-04-18 12:00:11.284465] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name c7502e1e-5497-4784-a44e-5b8cd8405cf2 already exists 00:24:23.186 [2024-04-18 12:00:11.284509] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:c7502e1e-5497-4784-a44e-5b8cd8405cf2 alias for bdev NVMe1n1 00:24:23.186 [2024-04-18 12:00:11.284528] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:23.186 Running I/O for 1 seconds... 00:24:23.186 00:24:23.186 Latency(us) 00:24:23.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.186 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:23.186 NVMe0n1 : 1.00 21949.96 85.74 0.00 0.00 5818.11 3316.12 10957.62 00:24:23.186 =================================================================================================================== 00:24:23.186 Total : 21949.96 85.74 0.00 0.00 5818.11 3316.12 10957.62 00:24:23.187 Received shutdown signal, test time was about 1.000000 seconds 00:24:23.187 00:24:23.187 Latency(us) 00:24:23.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.187 =================================================================================================================== 00:24:23.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.187 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:23.187 12:00:13 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:23.187 12:00:13 -- common/autotest_common.sh@1598 -- # read -r file 00:24:23.187 12:00:13 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:23.187 12:00:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:23.187 12:00:13 -- nvmf/common.sh@117 -- # sync 00:24:23.187 12:00:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.187 12:00:13 -- nvmf/common.sh@120 -- # set +e 00:24:23.187 12:00:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.187 12:00:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.187 rmmod nvme_tcp 00:24:23.187 rmmod nvme_fabrics 00:24:23.187 rmmod nvme_keyring 00:24:23.187 12:00:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.187 12:00:13 -- nvmf/common.sh@124 -- # set -e 
00:24:23.187 12:00:13 -- nvmf/common.sh@125 -- # return 0 00:24:23.187 12:00:13 -- nvmf/common.sh@478 -- # '[' -n 2565315 ']' 00:24:23.187 12:00:13 -- nvmf/common.sh@479 -- # killprocess 2565315 00:24:23.187 12:00:13 -- common/autotest_common.sh@936 -- # '[' -z 2565315 ']' 00:24:23.187 12:00:13 -- common/autotest_common.sh@940 -- # kill -0 2565315 00:24:23.187 12:00:13 -- common/autotest_common.sh@941 -- # uname 00:24:23.187 12:00:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.187 12:00:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2565315 00:24:23.187 12:00:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:23.187 12:00:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:23.187 12:00:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2565315' 00:24:23.187 killing process with pid 2565315 00:24:23.187 12:00:13 -- common/autotest_common.sh@955 -- # kill 2565315 00:24:23.187 12:00:13 -- common/autotest_common.sh@960 -- # wait 2565315 00:24:25.088 12:00:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:25.088 12:00:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:25.088 12:00:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:25.088 12:00:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.088 12:00:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.088 12:00:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.088 12:00:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.088 12:00:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.989 12:00:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.989 00:24:26.989 real 0m15.134s 00:24:26.989 user 0m23.007s 00:24:26.989 sys 0m6.022s 00:24:26.989 12:00:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:26.989 12:00:17 -- common/autotest_common.sh@10 -- # set +x 00:24:26.989 ************************************ 00:24:26.989 END TEST nvmf_multicontroller 00:24:26.989 ************************************ 00:24:26.989 12:00:17 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:26.989 12:00:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:26.989 12:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:26.989 12:00:17 -- common/autotest_common.sh@10 -- # set +x 00:24:27.247 ************************************ 00:24:27.247 START TEST nvmf_aer 00:24:27.247 ************************************ 00:24:27.247 12:00:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:27.247 * Looking for test storage... 
00:24:27.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.247 12:00:17 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.247 12:00:17 -- nvmf/common.sh@7 -- # uname -s 00:24:27.247 12:00:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.247 12:00:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.247 12:00:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.247 12:00:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.247 12:00:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.247 12:00:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.247 12:00:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.247 12:00:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.247 12:00:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.247 12:00:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.247 12:00:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:27.247 12:00:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:27.247 12:00:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.247 12:00:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.247 12:00:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.247 12:00:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.247 12:00:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.247 12:00:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.247 12:00:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.247 12:00:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.247 12:00:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.247 12:00:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.247 12:00:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.247 12:00:17 -- paths/export.sh@5 -- # export PATH 00:24:27.247 12:00:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.247 12:00:17 -- nvmf/common.sh@47 -- # : 0 00:24:27.247 12:00:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.247 12:00:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.247 12:00:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.247 12:00:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.247 12:00:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.247 12:00:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.247 12:00:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.247 12:00:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.247 12:00:17 -- host/aer.sh@11 -- # nvmftestinit 00:24:27.247 12:00:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:27.247 12:00:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.247 12:00:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:27.247 12:00:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:27.247 12:00:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:27.247 12:00:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.247 12:00:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.247 12:00:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.247 12:00:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:27.247 12:00:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:27.247 12:00:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.247 12:00:17 -- common/autotest_common.sh@10 -- # set +x 00:24:33.806 12:00:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:33.806 12:00:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.806 12:00:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.806 12:00:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.806 12:00:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.806 12:00:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.806 12:00:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.806 12:00:24 -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.806 12:00:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.806 12:00:24 -- nvmf/common.sh@296 -- # e810=() 00:24:33.806 12:00:24 -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.806 12:00:24 -- nvmf/common.sh@297 -- # x722=() 00:24:33.806 
12:00:24 -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.806 12:00:24 -- nvmf/common.sh@298 -- # mlx=() 00:24:33.806 12:00:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.806 12:00:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.806 12:00:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.806 12:00:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.806 12:00:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.806 12:00:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.806 12:00:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:33.806 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:33.806 12:00:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.806 12:00:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:33.806 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:33.806 12:00:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.806 12:00:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.806 12:00:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.806 12:00:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:33.806 12:00:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.806 12:00:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:33.806 Found net devices under 0000:af:00.0: cvl_0_0 00:24:33.806 12:00:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.806 12:00:24 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.806 12:00:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.806 12:00:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:33.806 12:00:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.806 12:00:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:33.806 Found net devices under 0000:af:00.1: cvl_0_1 00:24:33.806 12:00:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.806 12:00:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:33.806 12:00:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:33.806 12:00:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:33.806 12:00:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:33.806 12:00:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.806 12:00:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.806 12:00:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.806 12:00:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.806 12:00:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.806 12:00:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.806 12:00:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.806 12:00:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.806 12:00:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.806 12:00:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.806 12:00:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.806 12:00:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.806 12:00:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.806 12:00:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.806 12:00:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.806 12:00:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.806 12:00:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.063 12:00:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.063 12:00:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.063 12:00:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:24:34.063 00:24:34.063 --- 10.0.0.2 ping statistics --- 00:24:34.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.063 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:34.063 12:00:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:24:34.063 00:24:34.063 --- 10.0.0.1 ping statistics --- 00:24:34.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.063 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:24:34.063 12:00:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.063 12:00:24 -- nvmf/common.sh@411 -- # return 0 00:24:34.063 12:00:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:34.063 12:00:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.063 12:00:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:34.063 12:00:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:34.063 12:00:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.063 12:00:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:34.063 12:00:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:34.063 12:00:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:34.063 12:00:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:34.063 12:00:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:34.063 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:24:34.063 12:00:24 -- nvmf/common.sh@470 -- # nvmfpid=2570109 00:24:34.063 12:00:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.063 12:00:24 -- nvmf/common.sh@471 -- # waitforlisten 2570109 00:24:34.063 12:00:24 -- common/autotest_common.sh@817 -- # '[' -z 2570109 ']' 00:24:34.063 12:00:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.063 12:00:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:34.063 12:00:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.063 12:00:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:34.063 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:24:34.063 [2024-04-18 12:00:24.549862] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:34.064 [2024-04-18 12:00:24.549947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.321 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.321 [2024-04-18 12:00:24.681388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.578 [2024-04-18 12:00:24.905209] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.578 [2024-04-18 12:00:24.905257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.578 [2024-04-18 12:00:24.905270] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.578 [2024-04-18 12:00:24.905285] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.578 [2024-04-18 12:00:24.905295] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.578 [2024-04-18 12:00:24.905378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.578 [2024-04-18 12:00:24.905458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.578 [2024-04-18 12:00:24.905475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.578 [2024-04-18 12:00:24.905478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.835 12:00:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:34.835 12:00:25 -- common/autotest_common.sh@850 -- # return 0 00:24:34.835 12:00:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:34.835 12:00:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:34.835 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:34.835 12:00:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.835 12:00:25 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.835 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.835 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 [2024-04-18 12:00:25.385732] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.093 12:00:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.093 12:00:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:35.093 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.093 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 Malloc0 00:24:35.093 12:00:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.093 12:00:25 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:35.093 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.093 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 12:00:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.093 12:00:25 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.093 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.093 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 12:00:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.093 12:00:25 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.093 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.093 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 [2024-04-18 12:00:25.513612] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.093 12:00:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.093 12:00:25 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:35.093 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.093 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 [2024-04-18 12:00:25.521260] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:35.093 [ 00:24:35.093 { 00:24:35.093 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:35.093 "subtype": "Discovery", 00:24:35.093 "listen_addresses": [], 00:24:35.093 "allow_any_host": true, 00:24:35.093 "hosts": [] 00:24:35.093 }, 00:24:35.093 { 00:24:35.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:35.093 "subtype": "NVMe", 00:24:35.093 "listen_addresses": [ 00:24:35.094 { 00:24:35.094 "transport": "TCP", 00:24:35.094 "trtype": "TCP", 00:24:35.094 "adrfam": "IPv4", 00:24:35.094 "traddr": "10.0.0.2", 00:24:35.094 "trsvcid": "4420" 00:24:35.094 } 00:24:35.094 ], 00:24:35.094 "allow_any_host": true, 00:24:35.094 "hosts": [], 00:24:35.094 "serial_number": "SPDK00000000000001", 00:24:35.094 "model_number": "SPDK bdev Controller", 00:24:35.094 "max_namespaces": 2, 00:24:35.094 "min_cntlid": 1, 00:24:35.094 "max_cntlid": 65519, 00:24:35.094 "namespaces": [ 00:24:35.094 { 00:24:35.094 "nsid": 1, 00:24:35.094 "bdev_name": "Malloc0", 00:24:35.094 "name": "Malloc0", 00:24:35.094 "nguid": "F11F1F7DE4BB40E28E27D816DB51A58D", 00:24:35.094 "uuid": "f11f1f7d-e4bb-40e2-8e27-d816db51a58d" 00:24:35.094 } 00:24:35.094 ] 00:24:35.094 } 00:24:35.094 ] 00:24:35.094 12:00:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.094 12:00:25 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:35.094 12:00:25 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:35.094 12:00:25 -- host/aer.sh@33 -- # aerpid=2570334 00:24:35.094 12:00:25 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:35.094 12:00:25 -- common/autotest_common.sh@1251 -- # local i=0 00:24:35.094 12:00:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.094 12:00:25 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:35.094 12:00:25 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:24:35.094 12:00:25 -- common/autotest_common.sh@1254 -- # i=1 00:24:35.094 12:00:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:35.352 12:00:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.352 12:00:25 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:24:35.352 12:00:25 -- common/autotest_common.sh@1254 -- # i=2 00:24:35.352 12:00:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:35.352 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.352 12:00:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.352 12:00:25 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:24:35.352 12:00:25 -- common/autotest_common.sh@1254 -- # i=3 00:24:35.352 12:00:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:35.352 12:00:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.352 12:00:25 -- common/autotest_common.sh@1253 -- # '[' 3 -lt 200 ']' 00:24:35.352 12:00:25 -- common/autotest_common.sh@1254 -- # i=4 00:24:35.352 12:00:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:35.611 12:00:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.611 12:00:25 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:35.611 12:00:25 -- common/autotest_common.sh@1262 -- # return 0 00:24:35.611 12:00:25 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:35.611 12:00:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.611 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.611 Malloc1 00:24:35.611 12:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.611 12:00:26 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:35.611 12:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.611 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:35.871 12:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.871 12:00:26 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:35.871 12:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.871 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:35.871 [ 00:24:35.871 { 00:24:35.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:35.871 "subtype": "Discovery", 00:24:35.871 "listen_addresses": [], 00:24:35.871 "allow_any_host": true, 00:24:35.871 "hosts": [] 00:24:35.871 }, 00:24:35.871 { 00:24:35.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.871 "subtype": "NVMe", 00:24:35.871 "listen_addresses": [ 00:24:35.871 { 00:24:35.871 "transport": "TCP", 00:24:35.871 "trtype": "TCP", 00:24:35.871 "adrfam": "IPv4", 00:24:35.871 "traddr": "10.0.0.2", 00:24:35.871 "trsvcid": "4420" 00:24:35.871 } 00:24:35.871 ], 00:24:35.871 "allow_any_host": true, 00:24:35.871 "hosts": [], 00:24:35.871 "serial_number": "SPDK00000000000001", 00:24:35.871 "model_number": "SPDK bdev Controller", 00:24:35.871 "max_namespaces": 2, 00:24:35.871 "min_cntlid": 1, 00:24:35.871 "max_cntlid": 65519, 00:24:35.871 "namespaces": [ 00:24:35.871 { 00:24:35.871 "nsid": 1, 00:24:35.871 "bdev_name": "Malloc0", 00:24:35.871 "name": "Malloc0", 00:24:35.871 "nguid": "F11F1F7DE4BB40E28E27D816DB51A58D", 00:24:35.871 "uuid": "f11f1f7d-e4bb-40e2-8e27-d816db51a58d" 00:24:35.871 }, 00:24:35.871 { 00:24:35.871 "nsid": 2, 00:24:35.871 "bdev_name": "Malloc1", 00:24:35.871 "name": "Malloc1", 00:24:35.871 "nguid": "AF5289944CF2469CB3A2305CF4DD5397", 00:24:35.871 "uuid": "af528994-4cf2-469c-b3a2-305cf4dd5397" 00:24:35.871 } 00:24:35.871 ] 00:24:35.871 } 00:24:35.871 ] 00:24:35.871 12:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.871 12:00:26 -- host/aer.sh@43 -- # wait 2570334 00:24:35.871 Asynchronous Event Request test 00:24:35.871 Attaching to 10.0.0.2 00:24:35.871 Attached to 10.0.0.2 00:24:35.871 Registering asynchronous event callbacks... 00:24:35.871 Starting namespace attribute notice tests for all controllers... 00:24:35.871 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:35.871 aer_cb - Changed Namespace 00:24:35.871 Cleaning up... 
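The "aer_cb - Changed Namespace" notice above is the point of the AER test: a consumer subscribes for asynchronous events, a second namespace is hot-added, and the target raises a namespace-attribute-changed AEN. As a rough sketch, with the same caveat that binary paths and RPC socket locations depend on the environment rather than being taken from this log verbatim, the sequence is:

# Subsystem with room for two namespaces (-m 2); only Malloc0 is attached at first
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the AER consumer in the background, then hot-add a second namespace; the
# resulting attribute-change AEN drives the aer_cb callback recorded above
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2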
00:24:35.871 12:00:26 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:35.871 12:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.871 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:35.871 12:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.871 12:00:26 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:35.871 12:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.871 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.131 12:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.131 12:00:26 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.131 12:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.131 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.131 12:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.131 12:00:26 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:36.131 12:00:26 -- host/aer.sh@51 -- # nvmftestfini 00:24:36.131 12:00:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:36.131 12:00:26 -- nvmf/common.sh@117 -- # sync 00:24:36.131 12:00:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.131 12:00:26 -- nvmf/common.sh@120 -- # set +e 00:24:36.131 12:00:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.131 12:00:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.131 rmmod nvme_tcp 00:24:36.131 rmmod nvme_fabrics 00:24:36.131 rmmod nvme_keyring 00:24:36.131 12:00:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.131 12:00:26 -- nvmf/common.sh@124 -- # set -e 00:24:36.131 12:00:26 -- nvmf/common.sh@125 -- # return 0 00:24:36.131 12:00:26 -- nvmf/common.sh@478 -- # '[' -n 2570109 ']' 00:24:36.131 12:00:26 -- nvmf/common.sh@479 -- # killprocess 2570109 00:24:36.131 12:00:26 -- common/autotest_common.sh@936 -- # '[' -z 2570109 ']' 00:24:36.131 12:00:26 -- common/autotest_common.sh@940 -- # kill -0 2570109 00:24:36.389 12:00:26 -- common/autotest_common.sh@941 -- # uname 00:24:36.389 12:00:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:36.389 12:00:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2570109 00:24:36.389 12:00:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:36.389 12:00:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:36.389 12:00:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2570109' 00:24:36.389 killing process with pid 2570109 00:24:36.389 12:00:26 -- common/autotest_common.sh@955 -- # kill 2570109 00:24:36.389 [2024-04-18 12:00:26.738386] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:36.389 12:00:26 -- common/autotest_common.sh@960 -- # wait 2570109 00:24:37.788 12:00:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:37.788 12:00:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:37.788 12:00:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:37.788 12:00:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.788 12:00:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:37.788 12:00:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.788 12:00:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.788 12:00:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.707 12:00:30 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.707 00:24:39.707 real 0m12.543s 00:24:39.707 user 0m12.747s 00:24:39.707 sys 0m5.916s 00:24:39.707 12:00:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:39.707 12:00:30 -- common/autotest_common.sh@10 -- # set +x 00:24:39.707 ************************************ 00:24:39.707 END TEST nvmf_aer 00:24:39.707 ************************************ 00:24:39.707 12:00:30 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:39.707 12:00:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:39.707 12:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:39.707 12:00:30 -- common/autotest_common.sh@10 -- # set +x 00:24:39.965 ************************************ 00:24:39.965 START TEST nvmf_async_init 00:24:39.965 ************************************ 00:24:39.965 12:00:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:39.965 * Looking for test storage... 00:24:39.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.965 12:00:30 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.965 12:00:30 -- nvmf/common.sh@7 -- # uname -s 00:24:39.965 12:00:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.965 12:00:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.965 12:00:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.965 12:00:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.965 12:00:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.965 12:00:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.965 12:00:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.965 12:00:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.965 12:00:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.965 12:00:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.965 12:00:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.965 12:00:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:39.965 12:00:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.965 12:00:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.965 12:00:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.965 12:00:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.965 12:00:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.965 12:00:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.965 12:00:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.965 12:00:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.965 12:00:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.965 12:00:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.965 12:00:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.965 12:00:30 -- paths/export.sh@5 -- # export PATH 00:24:39.965 12:00:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.965 12:00:30 -- nvmf/common.sh@47 -- # : 0 00:24:39.965 12:00:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.965 12:00:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.965 12:00:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.965 12:00:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.965 12:00:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.965 12:00:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.965 12:00:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.965 12:00:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.965 12:00:30 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:39.965 12:00:30 -- host/async_init.sh@14 -- # null_block_size=512 00:24:39.965 12:00:30 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:39.965 12:00:30 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:39.965 12:00:30 -- host/async_init.sh@20 -- # uuidgen 00:24:39.965 12:00:30 -- host/async_init.sh@20 -- # tr -d - 00:24:39.965 12:00:30 -- host/async_init.sh@20 -- # nguid=12fea1b715b44be3a8f25525c571a64e 00:24:39.965 12:00:30 -- host/async_init.sh@22 -- # nvmftestinit 00:24:39.965 12:00:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
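For context, host/async_init.sh builds its namespace GUID by stripping the dashes from a freshly generated UUID and pairs it with a 1024-block x 512-byte null bdev. A minimal sketch of that setup step, using the variable names from the script (the GUID value shown is the one generated in this run):

  null_bdev_size=1024           # blocks
  null_block_size=512           # bytes per block
  null_bdev=null0
  nvme_bdev=nvme0
  nguid=$(uuidgen | tr -d -)    # 12fea1b715b44be3a8f25525c571a64e in this run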
00:24:39.965 12:00:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.965 12:00:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:39.965 12:00:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:39.965 12:00:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:39.965 12:00:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.965 12:00:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.965 12:00:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.965 12:00:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:39.965 12:00:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:39.965 12:00:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.965 12:00:30 -- common/autotest_common.sh@10 -- # set +x 00:24:46.531 12:00:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:46.531 12:00:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.531 12:00:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.531 12:00:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.531 12:00:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.531 12:00:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.531 12:00:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.531 12:00:36 -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.531 12:00:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.531 12:00:36 -- nvmf/common.sh@296 -- # e810=() 00:24:46.531 12:00:36 -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.531 12:00:36 -- nvmf/common.sh@297 -- # x722=() 00:24:46.531 12:00:36 -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.531 12:00:36 -- nvmf/common.sh@298 -- # mlx=() 00:24:46.531 12:00:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.531 12:00:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.531 12:00:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.531 12:00:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.531 12:00:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.531 12:00:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.531 12:00:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:46.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:46.531 12:00:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.531 12:00:36 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.531 12:00:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:46.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:46.531 12:00:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.531 12:00:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.531 12:00:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.531 12:00:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:46.531 12:00:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.531 12:00:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:46.531 Found net devices under 0000:af:00.0: cvl_0_0 00:24:46.531 12:00:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.531 12:00:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.531 12:00:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.531 12:00:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:46.531 12:00:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.531 12:00:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:46.531 Found net devices under 0000:af:00.1: cvl_0_1 00:24:46.531 12:00:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.531 12:00:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:46.531 12:00:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:46.531 12:00:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:46.531 12:00:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:46.531 12:00:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.531 12:00:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.531 12:00:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.531 12:00:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.531 12:00:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.531 12:00:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.531 12:00:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.531 12:00:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.531 12:00:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.531 12:00:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.531 12:00:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.531 12:00:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.531 12:00:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:46.531 12:00:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.531 12:00:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.531 12:00:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.531 12:00:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.531 12:00:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.531 12:00:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.531 12:00:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:24:46.531 00:24:46.531 --- 10.0.0.2 ping statistics --- 00:24:46.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.532 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:46.532 12:00:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:46.532 00:24:46.532 --- 10.0.0.1 ping statistics --- 00:24:46.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.532 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:46.532 12:00:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.532 12:00:36 -- nvmf/common.sh@411 -- # return 0 00:24:46.532 12:00:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:46.532 12:00:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.532 12:00:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:46.532 12:00:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:46.532 12:00:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.532 12:00:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:46.532 12:00:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:46.532 12:00:36 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:46.532 12:00:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:46.532 12:00:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:46.532 12:00:36 -- common/autotest_common.sh@10 -- # set +x 00:24:46.532 12:00:36 -- nvmf/common.sh@470 -- # nvmfpid=2574358 00:24:46.532 12:00:36 -- nvmf/common.sh@471 -- # waitforlisten 2574358 00:24:46.532 12:00:36 -- common/autotest_common.sh@817 -- # '[' -z 2574358 ']' 00:24:46.532 12:00:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.532 12:00:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:46.532 12:00:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.532 12:00:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:46.532 12:00:36 -- common/autotest_common.sh@10 -- # set +x 00:24:46.532 12:00:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:46.532 [2024-04-18 12:00:36.984570] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
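The nvmf_tcp_init sequence above builds the test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP/4420 is opened in iptables, and both directions are ping-checked before nvme-tcp is loaded and nvmf_tgt is started. Condensed into a sketch (interface names as detected on this node; run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
  modprobe nvme-tcp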
00:24:46.532 [2024-04-18 12:00:36.984658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.532 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.789 [2024-04-18 12:00:37.114086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.789 [2024-04-18 12:00:37.319614] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.789 [2024-04-18 12:00:37.319661] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.789 [2024-04-18 12:00:37.319673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.789 [2024-04-18 12:00:37.319701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.789 [2024-04-18 12:00:37.319711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.789 [2024-04-18 12:00:37.319748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.356 12:00:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:47.356 12:00:37 -- common/autotest_common.sh@850 -- # return 0 00:24:47.356 12:00:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:47.356 12:00:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 12:00:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.356 12:00:37 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 [2024-04-18 12:00:37.774677] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.356 12:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.356 12:00:37 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 null0 00:24:47.356 12:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.356 12:00:37 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 12:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.356 12:00:37 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 12:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.356 12:00:37 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 12fea1b715b44be3a8f25525c571a64e 00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 12:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.356 12:00:37 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
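Stripped of the xtrace noise, the target-side provisioning for nvmf_async_init reduces to the RPC sequence below (a sketch using the harness's rpc_cmd wrapper; the same methods can be issued with scripts/rpc.py against the running nvmf_tgt):

  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd bdev_null_create null0 1024 512                   # $null_bdev, 1024 blocks x 512 B
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: attach over TCP and expect the namespace to surface as nvme0n1
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0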
00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 [2024-04-18 12:00:37.814964] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.356 12:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.356 12:00:37 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:47.356 12:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.356 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.615 nvme0n1 00:24:47.615 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.615 12:00:38 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:47.615 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.615 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.615 [ 00:24:47.615 { 00:24:47.615 "name": "nvme0n1", 00:24:47.615 "aliases": [ 00:24:47.615 "12fea1b7-15b4-4be3-a8f2-5525c571a64e" 00:24:47.615 ], 00:24:47.615 "product_name": "NVMe disk", 00:24:47.615 "block_size": 512, 00:24:47.615 "num_blocks": 2097152, 00:24:47.615 "uuid": "12fea1b7-15b4-4be3-a8f2-5525c571a64e", 00:24:47.615 "assigned_rate_limits": { 00:24:47.615 "rw_ios_per_sec": 0, 00:24:47.615 "rw_mbytes_per_sec": 0, 00:24:47.615 "r_mbytes_per_sec": 0, 00:24:47.615 "w_mbytes_per_sec": 0 00:24:47.615 }, 00:24:47.615 "claimed": false, 00:24:47.615 "zoned": false, 00:24:47.615 "supported_io_types": { 00:24:47.615 "read": true, 00:24:47.615 "write": true, 00:24:47.615 "unmap": false, 00:24:47.615 "write_zeroes": true, 00:24:47.615 "flush": true, 00:24:47.615 "reset": true, 00:24:47.615 "compare": true, 00:24:47.615 "compare_and_write": true, 00:24:47.615 "abort": true, 00:24:47.615 "nvme_admin": true, 00:24:47.615 "nvme_io": true 00:24:47.615 }, 00:24:47.615 "memory_domains": [ 00:24:47.615 { 00:24:47.615 "dma_device_id": "system", 00:24:47.615 "dma_device_type": 1 00:24:47.615 } 00:24:47.615 ], 00:24:47.615 "driver_specific": { 00:24:47.615 "nvme": [ 00:24:47.615 { 00:24:47.615 "trid": { 00:24:47.615 "trtype": "TCP", 00:24:47.615 "adrfam": "IPv4", 00:24:47.615 "traddr": "10.0.0.2", 00:24:47.615 "trsvcid": "4420", 00:24:47.615 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:47.615 }, 00:24:47.615 "ctrlr_data": { 00:24:47.615 "cntlid": 1, 00:24:47.615 "vendor_id": "0x8086", 00:24:47.615 "model_number": "SPDK bdev Controller", 00:24:47.615 "serial_number": "00000000000000000000", 00:24:47.615 "firmware_revision": "24.05", 00:24:47.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.615 "oacs": { 00:24:47.615 "security": 0, 00:24:47.615 "format": 0, 00:24:47.615 "firmware": 0, 00:24:47.615 "ns_manage": 0 00:24:47.615 }, 00:24:47.615 "multi_ctrlr": true, 00:24:47.615 "ana_reporting": false 00:24:47.615 }, 00:24:47.615 "vs": { 00:24:47.615 "nvme_version": "1.3" 00:24:47.615 }, 00:24:47.615 "ns_data": { 00:24:47.615 "id": 1, 00:24:47.615 "can_share": true 00:24:47.615 } 00:24:47.615 } 00:24:47.615 ], 00:24:47.615 "mp_policy": "active_passive" 00:24:47.615 } 00:24:47.615 } 00:24:47.615 ] 00:24:47.615 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.615 12:00:38 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:47.615 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.615 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.615 [2024-04-18 12:00:38.068433] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.615 [2024-04-18 12:00:38.068522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:24:47.873 [2024-04-18 12:00:38.210569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:47.873 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.873 12:00:38 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:47.873 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.873 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 [ 00:24:47.874 { 00:24:47.874 "name": "nvme0n1", 00:24:47.874 "aliases": [ 00:24:47.874 "12fea1b7-15b4-4be3-a8f2-5525c571a64e" 00:24:47.874 ], 00:24:47.874 "product_name": "NVMe disk", 00:24:47.874 "block_size": 512, 00:24:47.874 "num_blocks": 2097152, 00:24:47.874 "uuid": "12fea1b7-15b4-4be3-a8f2-5525c571a64e", 00:24:47.874 "assigned_rate_limits": { 00:24:47.874 "rw_ios_per_sec": 0, 00:24:47.874 "rw_mbytes_per_sec": 0, 00:24:47.874 "r_mbytes_per_sec": 0, 00:24:47.874 "w_mbytes_per_sec": 0 00:24:47.874 }, 00:24:47.874 "claimed": false, 00:24:47.874 "zoned": false, 00:24:47.874 "supported_io_types": { 00:24:47.874 "read": true, 00:24:47.874 "write": true, 00:24:47.874 "unmap": false, 00:24:47.874 "write_zeroes": true, 00:24:47.874 "flush": true, 00:24:47.874 "reset": true, 00:24:47.874 "compare": true, 00:24:47.874 "compare_and_write": true, 00:24:47.874 "abort": true, 00:24:47.874 "nvme_admin": true, 00:24:47.874 "nvme_io": true 00:24:47.874 }, 00:24:47.874 "memory_domains": [ 00:24:47.874 { 00:24:47.874 "dma_device_id": "system", 00:24:47.874 "dma_device_type": 1 00:24:47.874 } 00:24:47.874 ], 00:24:47.874 "driver_specific": { 00:24:47.874 "nvme": [ 00:24:47.874 { 00:24:47.874 "trid": { 00:24:47.874 "trtype": "TCP", 00:24:47.874 "adrfam": "IPv4", 00:24:47.874 "traddr": "10.0.0.2", 00:24:47.874 "trsvcid": "4420", 00:24:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:47.874 }, 00:24:47.874 "ctrlr_data": { 00:24:47.874 "cntlid": 2, 00:24:47.874 "vendor_id": "0x8086", 00:24:47.874 "model_number": "SPDK bdev Controller", 00:24:47.874 "serial_number": "00000000000000000000", 00:24:47.874 "firmware_revision": "24.05", 00:24:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.874 "oacs": { 00:24:47.874 "security": 0, 00:24:47.874 "format": 0, 00:24:47.874 "firmware": 0, 00:24:47.874 "ns_manage": 0 00:24:47.874 }, 00:24:47.874 "multi_ctrlr": true, 00:24:47.874 "ana_reporting": false 00:24:47.874 }, 00:24:47.874 "vs": { 00:24:47.874 "nvme_version": "1.3" 00:24:47.874 }, 00:24:47.874 "ns_data": { 00:24:47.874 "id": 1, 00:24:47.874 "can_share": true 00:24:47.874 } 00:24:47.874 } 00:24:47.874 ], 00:24:47.874 "mp_policy": "active_passive" 00:24:47.874 } 00:24:47.874 } 00:24:47.874 ] 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@53 -- # mktemp 00:24:47.874 12:00:38 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mxWkP1MTNe 00:24:47.874 12:00:38 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:47.874 12:00:38 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mxWkP1MTNe 00:24:47.874 12:00:38 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 [2024-04-18 12:00:38.269594] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.874 [2024-04-18 12:00:38.269772] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mxWkP1MTNe 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 [2024-04-18 12:00:38.277620] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mxWkP1MTNe 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 [2024-04-18 12:00:38.285629] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.874 [2024-04-18 12:00:38.285736] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:47.874 nvme0n1 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 [ 00:24:47.874 { 00:24:47.874 "name": "nvme0n1", 00:24:47.874 "aliases": [ 00:24:47.874 "12fea1b7-15b4-4be3-a8f2-5525c571a64e" 00:24:47.874 ], 00:24:47.874 "product_name": "NVMe disk", 00:24:47.874 "block_size": 512, 00:24:47.874 "num_blocks": 2097152, 00:24:47.874 "uuid": "12fea1b7-15b4-4be3-a8f2-5525c571a64e", 00:24:47.874 "assigned_rate_limits": { 00:24:47.874 "rw_ios_per_sec": 0, 00:24:47.874 "rw_mbytes_per_sec": 0, 00:24:47.874 "r_mbytes_per_sec": 0, 00:24:47.874 "w_mbytes_per_sec": 0 00:24:47.874 }, 00:24:47.874 "claimed": false, 00:24:47.874 "zoned": false, 00:24:47.874 "supported_io_types": { 00:24:47.874 "read": true, 00:24:47.874 "write": true, 00:24:47.874 "unmap": false, 00:24:47.874 "write_zeroes": true, 00:24:47.874 "flush": true, 00:24:47.874 "reset": true, 00:24:47.874 "compare": true, 00:24:47.874 "compare_and_write": true, 00:24:47.874 
"abort": true, 00:24:47.874 "nvme_admin": true, 00:24:47.874 "nvme_io": true 00:24:47.874 }, 00:24:47.874 "memory_domains": [ 00:24:47.874 { 00:24:47.874 "dma_device_id": "system", 00:24:47.874 "dma_device_type": 1 00:24:47.874 } 00:24:47.874 ], 00:24:47.874 "driver_specific": { 00:24:47.874 "nvme": [ 00:24:47.874 { 00:24:47.874 "trid": { 00:24:47.874 "trtype": "TCP", 00:24:47.874 "adrfam": "IPv4", 00:24:47.874 "traddr": "10.0.0.2", 00:24:47.874 "trsvcid": "4421", 00:24:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:47.874 }, 00:24:47.874 "ctrlr_data": { 00:24:47.874 "cntlid": 3, 00:24:47.874 "vendor_id": "0x8086", 00:24:47.874 "model_number": "SPDK bdev Controller", 00:24:47.874 "serial_number": "00000000000000000000", 00:24:47.874 "firmware_revision": "24.05", 00:24:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.874 "oacs": { 00:24:47.874 "security": 0, 00:24:47.874 "format": 0, 00:24:47.874 "firmware": 0, 00:24:47.874 "ns_manage": 0 00:24:47.874 }, 00:24:47.874 "multi_ctrlr": true, 00:24:47.874 "ana_reporting": false 00:24:47.874 }, 00:24:47.874 "vs": { 00:24:47.874 "nvme_version": "1.3" 00:24:47.874 }, 00:24:47.874 "ns_data": { 00:24:47.874 "id": 1, 00:24:47.874 "can_share": true 00:24:47.874 } 00:24:47.874 } 00:24:47.874 ], 00:24:47.874 "mp_policy": "active_passive" 00:24:47.874 } 00:24:47.874 } 00:24:47.874 ] 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.874 12:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.874 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.874 12:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.874 12:00:38 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.mxWkP1MTNe 00:24:47.874 12:00:38 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:47.874 12:00:38 -- host/async_init.sh@78 -- # nvmftestfini 00:24:47.874 12:00:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:47.874 12:00:38 -- nvmf/common.sh@117 -- # sync 00:24:47.874 12:00:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.874 12:00:38 -- nvmf/common.sh@120 -- # set +e 00:24:47.874 12:00:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.874 12:00:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.874 rmmod nvme_tcp 00:24:47.874 rmmod nvme_fabrics 00:24:48.132 rmmod nvme_keyring 00:24:48.132 12:00:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:48.132 12:00:38 -- nvmf/common.sh@124 -- # set -e 00:24:48.132 12:00:38 -- nvmf/common.sh@125 -- # return 0 00:24:48.132 12:00:38 -- nvmf/common.sh@478 -- # '[' -n 2574358 ']' 00:24:48.132 12:00:38 -- nvmf/common.sh@479 -- # killprocess 2574358 00:24:48.132 12:00:38 -- common/autotest_common.sh@936 -- # '[' -z 2574358 ']' 00:24:48.132 12:00:38 -- common/autotest_common.sh@940 -- # kill -0 2574358 00:24:48.132 12:00:38 -- common/autotest_common.sh@941 -- # uname 00:24:48.132 12:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.132 12:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2574358 00:24:48.132 12:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:48.132 12:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:48.132 12:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2574358' 00:24:48.132 killing process with pid 2574358 00:24:48.132 12:00:38 -- common/autotest_common.sh@955 -- # kill 2574358 00:24:48.132 
[2024-04-18 12:00:38.502960] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:48.132 [2024-04-18 12:00:38.502995] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:48.132 12:00:38 -- common/autotest_common.sh@960 -- # wait 2574358 00:24:49.507 12:00:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:49.507 12:00:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:49.507 12:00:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:49.507 12:00:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:49.507 12:00:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:49.507 12:00:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.507 12:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.507 12:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.426 12:00:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:51.426 00:24:51.426 real 0m11.460s 00:24:51.426 user 0m4.493s 00:24:51.426 sys 0m5.422s 00:24:51.426 12:00:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.426 12:00:41 -- common/autotest_common.sh@10 -- # set +x 00:24:51.426 ************************************ 00:24:51.426 END TEST nvmf_async_init 00:24:51.426 ************************************ 00:24:51.426 12:00:41 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:51.426 12:00:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:51.426 12:00:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.426 12:00:41 -- common/autotest_common.sh@10 -- # set +x 00:24:51.426 ************************************ 00:24:51.426 START TEST dma 00:24:51.426 ************************************ 00:24:51.426 12:00:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:51.684 * Looking for test storage... 
00:24:51.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.684 12:00:42 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.684 12:00:42 -- nvmf/common.sh@7 -- # uname -s 00:24:51.684 12:00:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.684 12:00:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.684 12:00:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.684 12:00:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.684 12:00:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.684 12:00:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.684 12:00:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.684 12:00:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.684 12:00:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.684 12:00:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.684 12:00:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:51.684 12:00:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:51.684 12:00:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.684 12:00:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.684 12:00:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.684 12:00:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.684 12:00:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.684 12:00:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.684 12:00:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.684 12:00:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.685 12:00:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.685 12:00:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.685 12:00:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.685 12:00:42 -- paths/export.sh@5 -- # export PATH 00:24:51.685 12:00:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.685 12:00:42 -- nvmf/common.sh@47 -- # : 0 00:24:51.685 12:00:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.685 12:00:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.685 12:00:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.685 12:00:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.685 12:00:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.685 12:00:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.685 12:00:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.685 12:00:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.685 12:00:42 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:51.685 12:00:42 -- host/dma.sh@13 -- # exit 0 00:24:51.685 00:24:51.685 real 0m0.143s 00:24:51.685 user 0m0.059s 00:24:51.685 sys 0m0.093s 00:24:51.685 12:00:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.685 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:24:51.685 ************************************ 00:24:51.685 END TEST dma 00:24:51.685 ************************************ 00:24:51.685 12:00:42 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:51.685 12:00:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:51.685 12:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.685 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:24:51.942 ************************************ 00:24:51.942 START TEST nvmf_identify 00:24:51.942 ************************************ 00:24:51.942 12:00:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:51.942 * Looking for test storage... 
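Note that TEST dma above is effectively a no-op on this job: host/dma.sh only has work to do on RDMA transports, so with --transport=tcp it exits immediately, which is why that block finishes in well under a second. The guard, as it appears expanded in the xtrace (the literal tcp comes from the --transport argument):

  # host/dma.sh, lines 12-13 as traced
  '[' tcp '!=' rdma ']'
  exit 0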
00:24:51.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.942 12:00:42 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.942 12:00:42 -- nvmf/common.sh@7 -- # uname -s 00:24:51.942 12:00:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.942 12:00:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.943 12:00:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.943 12:00:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.943 12:00:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.943 12:00:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.943 12:00:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.943 12:00:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.943 12:00:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.943 12:00:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.943 12:00:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:51.943 12:00:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:51.943 12:00:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.943 12:00:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.943 12:00:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.943 12:00:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.943 12:00:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.943 12:00:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.943 12:00:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.943 12:00:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.943 12:00:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.943 12:00:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.943 12:00:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.943 12:00:42 -- paths/export.sh@5 -- # export PATH 00:24:51.943 12:00:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.943 12:00:42 -- nvmf/common.sh@47 -- # : 0 00:24:51.943 12:00:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.943 12:00:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.943 12:00:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.943 12:00:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.943 12:00:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.943 12:00:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.943 12:00:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.943 12:00:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.943 12:00:42 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.943 12:00:42 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.943 12:00:42 -- host/identify.sh@14 -- # nvmftestinit 00:24:51.943 12:00:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:51.943 12:00:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.943 12:00:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:51.943 12:00:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:51.943 12:00:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:51.943 12:00:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.943 12:00:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.943 12:00:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.943 12:00:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:51.943 12:00:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:51.943 12:00:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.943 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 12:00:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:58.501 12:00:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.501 12:00:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.501 12:00:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.501 12:00:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.501 12:00:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.501 12:00:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.501 12:00:48 -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.501 12:00:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.501 12:00:48 -- nvmf/common.sh@296 
-- # e810=() 00:24:58.501 12:00:48 -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.501 12:00:48 -- nvmf/common.sh@297 -- # x722=() 00:24:58.501 12:00:48 -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.501 12:00:48 -- nvmf/common.sh@298 -- # mlx=() 00:24:58.501 12:00:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.501 12:00:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.501 12:00:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.501 12:00:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.501 12:00:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.501 12:00:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.501 12:00:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.501 12:00:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.501 12:00:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.501 12:00:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:58.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:58.501 12:00:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.501 12:00:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.502 12:00:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:58.502 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:58.502 12:00:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.502 12:00:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.502 12:00:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.502 12:00:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:58.502 12:00:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.502 12:00:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:58.502 Found 
net devices under 0000:af:00.0: cvl_0_0 00:24:58.502 12:00:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.502 12:00:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.502 12:00:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.502 12:00:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:58.502 12:00:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.502 12:00:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:58.502 Found net devices under 0000:af:00.1: cvl_0_1 00:24:58.502 12:00:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.502 12:00:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:58.502 12:00:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:58.502 12:00:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:58.502 12:00:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.502 12:00:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.502 12:00:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.502 12:00:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.502 12:00:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.502 12:00:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.502 12:00:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.502 12:00:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.502 12:00:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.502 12:00:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.502 12:00:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.502 12:00:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.502 12:00:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.502 12:00:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.502 12:00:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.502 12:00:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.502 12:00:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.502 12:00:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.502 12:00:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.502 12:00:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:58.502 00:24:58.502 --- 10.0.0.2 ping statistics --- 00:24:58.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.502 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:58.502 12:00:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:24:58.502 00:24:58.502 --- 10.0.0.1 ping statistics --- 00:24:58.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.502 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:58.502 12:00:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.502 12:00:48 -- nvmf/common.sh@411 -- # return 0 00:24:58.502 12:00:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:58.502 12:00:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.502 12:00:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:58.502 12:00:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.502 12:00:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:58.502 12:00:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:58.502 12:00:48 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:58.502 12:00:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:58.502 12:00:48 -- common/autotest_common.sh@10 -- # set +x 00:24:58.502 12:00:48 -- host/identify.sh@19 -- # nvmfpid=2578405 00:24:58.502 12:00:48 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.502 12:00:48 -- host/identify.sh@23 -- # waitforlisten 2578405 00:24:58.502 12:00:48 -- common/autotest_common.sh@817 -- # '[' -z 2578405 ']' 00:24:58.502 12:00:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.502 12:00:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.502 12:00:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.502 12:00:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.502 12:00:48 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.502 12:00:48 -- common/autotest_common.sh@10 -- # set +x 00:24:58.502 [2024-04-18 12:00:48.576766] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:58.502 [2024-04-18 12:00:48.576856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.502 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.502 [2024-04-18 12:00:48.705616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.502 [2024-04-18 12:00:48.914435] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.502 [2024-04-18 12:00:48.914486] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.502 [2024-04-18 12:00:48.914498] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.502 [2024-04-18 12:00:48.914528] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.502 [2024-04-18 12:00:48.914538] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
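Unlike the async_init run, which pinned nvmf_tgt to a single core (-m 0x1), the identify test starts the target with core mask 0xF, so DPDK sees four cores and a reactor comes up on each of cores 0-3 (pid 2578405 above). Reduced to its essentials, and assuming the backgrounding that waitforlisten implies:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                    # 2578405 in this run
  waitforlisten "$nvmfpid"      # returns once /var/tmp/spdk.sock accepts RPCs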
00:24:58.502 [2024-04-18 12:00:48.914660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.502 [2024-04-18 12:00:48.914736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.502 [2024-04-18 12:00:48.914797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.502 [2024-04-18 12:00:48.914806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.069 12:00:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.069 12:00:49 -- common/autotest_common.sh@850 -- # return 0 00:24:59.069 12:00:49 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 [2024-04-18 12:00:49.352144] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:59.069 12:00:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 12:00:49 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 Malloc0 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 [2024-04-18 12:00:49.524220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:59.069 12:00:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.069 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.069 [2024-04-18 12:00:49.539982] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:59.069 [ 
00:24:59.069 { 00:24:59.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:59.069 "subtype": "Discovery", 00:24:59.069 "listen_addresses": [ 00:24:59.069 { 00:24:59.069 "transport": "TCP", 00:24:59.069 "trtype": "TCP", 00:24:59.069 "adrfam": "IPv4", 00:24:59.069 "traddr": "10.0.0.2", 00:24:59.069 "trsvcid": "4420" 00:24:59.069 } 00:24:59.069 ], 00:24:59.069 "allow_any_host": true, 00:24:59.069 "hosts": [] 00:24:59.069 }, 00:24:59.069 { 00:24:59.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.069 "subtype": "NVMe", 00:24:59.069 "listen_addresses": [ 00:24:59.069 { 00:24:59.069 "transport": "TCP", 00:24:59.069 "trtype": "TCP", 00:24:59.069 "adrfam": "IPv4", 00:24:59.069 "traddr": "10.0.0.2", 00:24:59.069 "trsvcid": "4420" 00:24:59.069 } 00:24:59.069 ], 00:24:59.069 "allow_any_host": true, 00:24:59.069 "hosts": [], 00:24:59.069 "serial_number": "SPDK00000000000001", 00:24:59.069 "model_number": "SPDK bdev Controller", 00:24:59.069 "max_namespaces": 32, 00:24:59.069 "min_cntlid": 1, 00:24:59.069 "max_cntlid": 65519, 00:24:59.069 "namespaces": [ 00:24:59.069 { 00:24:59.069 "nsid": 1, 00:24:59.069 "bdev_name": "Malloc0", 00:24:59.069 "name": "Malloc0", 00:24:59.069 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:59.069 "eui64": "ABCDEF0123456789", 00:24:59.069 "uuid": "cee5bf67-3934-4a36-8afa-2d38bd360a91" 00:24:59.069 } 00:24:59.069 ] 00:24:59.069 } 00:24:59.069 ] 00:24:59.069 12:00:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.069 12:00:49 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:59.069 [2024-04-18 12:00:49.605395] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
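[editor's note] The rpc_cmd calls traced above correspond to plain scripts/rpc.py invocations against the target's RPC socket. A minimal sketch of the same configuration and the discovery query, assuming the SPDK source tree as the working directory and the default /var/tmp/spdk.sock RPC socket (a Unix socket, so the RPC calls do not need to enter the network namespace):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192              # same transport options as the rpc_cmd trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                                   # prints the JSON shown above
    # Query the discovery subsystem from the initiator side, as host/identify.sh@39 does:
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all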
00:24:59.069 [2024-04-18 12:00:49.605470] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578684 ] 00:24:59.329 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.329 [2024-04-18 12:00:49.653903] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:59.329 [2024-04-18 12:00:49.654013] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:59.329 [2024-04-18 12:00:49.654025] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:59.329 [2024-04-18 12:00:49.654046] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:59.329 [2024-04-18 12:00:49.654061] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:59.329 [2024-04-18 12:00:49.654552] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:59.329 [2024-04-18 12:00:49.654595] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:24:59.329 [2024-04-18 12:00:49.665469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:59.329 [2024-04-18 12:00:49.665497] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:59.329 [2024-04-18 12:00:49.665505] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:59.329 [2024-04-18 12:00:49.665513] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:59.329 [2024-04-18 12:00:49.665572] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.665582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.665590] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.329 [2024-04-18 12:00:49.665614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:59.329 [2024-04-18 12:00:49.665640] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.329 [2024-04-18 12:00:49.673471] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.329 [2024-04-18 12:00:49.673493] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.329 [2024-04-18 12:00:49.673500] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.673509] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.329 [2024-04-18 12:00:49.673532] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:59.329 [2024-04-18 12:00:49.673546] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:59.329 [2024-04-18 12:00:49.673557] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:59.329 [2024-04-18 12:00:49.673576] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.673584] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.673592] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.329 [2024-04-18 12:00:49.673609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.329 [2024-04-18 12:00:49.673631] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.329 [2024-04-18 12:00:49.673878] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.329 [2024-04-18 12:00:49.673890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.329 [2024-04-18 12:00:49.673897] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.673907] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.329 [2024-04-18 12:00:49.673921] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:59.329 [2024-04-18 12:00:49.673935] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:59.329 [2024-04-18 12:00:49.673947] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.673958] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.673965] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.329 [2024-04-18 12:00:49.673980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.329 [2024-04-18 12:00:49.674002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.329 [2024-04-18 12:00:49.674114] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.329 [2024-04-18 12:00:49.674127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.329 [2024-04-18 12:00:49.674133] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.329 [2024-04-18 12:00:49.674140] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.329 [2024-04-18 12:00:49.674150] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:59.329 [2024-04-18 12:00:49.674164] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:59.330 [2024-04-18 12:00:49.674176] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.674208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.330 [2024-04-18 12:00:49.674225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.330 [2024-04-18 12:00:49.674330] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.330 [2024-04-18 12:00:49.674340] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.330 [2024-04-18 12:00:49.674347] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674358] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.330 [2024-04-18 12:00:49.674368] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:59.330 [2024-04-18 12:00:49.674384] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674391] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674399] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.674411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.330 [2024-04-18 12:00:49.674427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.330 [2024-04-18 12:00:49.674583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.330 [2024-04-18 12:00:49.674594] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.330 [2024-04-18 12:00:49.674601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.330 [2024-04-18 12:00:49.674617] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:59.330 [2024-04-18 12:00:49.674627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:59.330 [2024-04-18 12:00:49.674642] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:59.330 [2024-04-18 12:00:49.674754] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:59.330 [2024-04-18 12:00:49.674767] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:59.330 [2024-04-18 12:00:49.674782] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674790] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.674797] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.674817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.330 [2024-04-18 12:00:49.674835] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.330 [2024-04-18 12:00:49.674990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.330 [2024-04-18 12:00:49.675000] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.330 [2024-04-18 12:00:49.675006] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675013] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.330 [2024-04-18 12:00:49.675022] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:59.330 [2024-04-18 12:00:49.675043] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675053] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675060] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.675072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.330 [2024-04-18 12:00:49.675089] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.330 [2024-04-18 12:00:49.675297] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.330 [2024-04-18 12:00:49.675307] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.330 [2024-04-18 12:00:49.675313] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675319] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.330 [2024-04-18 12:00:49.675329] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:59.330 [2024-04-18 12:00:49.675338] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:59.330 [2024-04-18 12:00:49.675351] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:59.330 [2024-04-18 12:00:49.675365] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:59.330 [2024-04-18 12:00:49.675381] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675389] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.675401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.330 [2024-04-18 12:00:49.675420] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.330 [2024-04-18 12:00:49.675586] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.330 [2024-04-18 12:00:49.675597] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.330 [2024-04-18 12:00:49.675604] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675612] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:24:59.330 [2024-04-18 12:00:49.675621] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.330 [2024-04-18 12:00:49.675633] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675647] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675657] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675779] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.330 [2024-04-18 12:00:49.675788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.330 [2024-04-18 12:00:49.675794] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675801] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.330 [2024-04-18 12:00:49.675817] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:59.330 [2024-04-18 12:00:49.675827] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:59.330 [2024-04-18 12:00:49.675835] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:59.330 [2024-04-18 12:00:49.675847] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:59.330 [2024-04-18 12:00:49.675856] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:59.330 [2024-04-18 12:00:49.675865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:59.330 [2024-04-18 12:00:49.675893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:59.330 [2024-04-18 12:00:49.675906] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675914] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.675922] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.675935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:59.330 [2024-04-18 12:00:49.675953] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.330 [2024-04-18 12:00:49.676112] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.330 [2024-04-18 12:00:49.676122] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.330 [2024-04-18 12:00:49.676128] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676135] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.330 [2024-04-18 12:00:49.676147] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676167] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.676181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.330 [2024-04-18 12:00:49.676192] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676205] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.676215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.330 [2024-04-18 12:00:49.676224] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676237] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.676247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.330 [2024-04-18 12:00:49.676258] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676264] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.330 [2024-04-18 12:00:49.676273] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.330 [2024-04-18 12:00:49.676283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.330 [2024-04-18 12:00:49.676292] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:59.331 [2024-04-18 12:00:49.676307] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:59.331 [2024-04-18 12:00:49.676318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.676325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.331 [2024-04-18 12:00:49.676337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.331 [2024-04-18 12:00:49.676356] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.331 [2024-04-18 12:00:49.676364] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:24:59.331 [2024-04-18 12:00:49.676371] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:24:59.331 [2024-04-18 12:00:49.676379] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.331 [2024-04-18 12:00:49.676386] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.331 [2024-04-18 12:00:49.676542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.331 [2024-04-18 12:00:49.676557] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.331 [2024-04-18 12:00:49.676563] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.676570] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.331 [2024-04-18 12:00:49.676580] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:59.331 [2024-04-18 12:00:49.676589] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:59.331 [2024-04-18 12:00:49.676624] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.676632] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.331 [2024-04-18 12:00:49.676644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.331 [2024-04-18 12:00:49.676661] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.331 [2024-04-18 12:00:49.676788] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.331 [2024-04-18 12:00:49.676799] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.331 [2024-04-18 12:00:49.676806] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.676813] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:24:59.331 [2024-04-18 12:00:49.676822] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.331 [2024-04-18 12:00:49.676833] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.676951] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.676964] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.720462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.331 [2024-04-18 12:00:49.720483] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.331 [2024-04-18 12:00:49.720490] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.720498] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.331 [2024-04-18 12:00:49.720531] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:59.331 [2024-04-18 12:00:49.720572] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.720580] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.331 [2024-04-18 12:00:49.720594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.331 [2024-04-18 12:00:49.720605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.720615] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:24:59.331 [2024-04-18 12:00:49.720623] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:24:59.331 [2024-04-18 12:00:49.720633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.331 [2024-04-18 12:00:49.720655] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.331 [2024-04-18 12:00:49.720664] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:59.331 [2024-04-18 12:00:49.721028] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.331 [2024-04-18 12:00:49.721039] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.331 [2024-04-18 12:00:49.721046] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.721054] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:24:59.331 [2024-04-18 12:00:49.721063] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:24:59.331 [2024-04-18 12:00:49.721071] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.721082] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.721092] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.721106] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.331 [2024-04-18 12:00:49.721117] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.331 [2024-04-18 12:00:49.721124] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.721131] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:24:59.331 [2024-04-18 12:00:49.761632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.331 [2024-04-18 12:00:49.761652] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.331 [2024-04-18 12:00:49.761659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.761666] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.331 [2024-04-18 12:00:49.761700] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.761709] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.331 [2024-04-18 12:00:49.761735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.331 [2024-04-18 12:00:49.761759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.331 [2024-04-18 12:00:49.762065] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.331 [2024-04-18 12:00:49.762078] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.331 [2024-04-18 12:00:49.762084] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762091] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:24:59.331 [2024-04-18 12:00:49.762099] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:24:59.331 [2024-04-18 12:00:49.762106] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762116] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762122] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762172] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.331 [2024-04-18 12:00:49.762181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.331 [2024-04-18 12:00:49.762187] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762193] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.331 [2024-04-18 12:00:49.762212] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762220] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.331 [2024-04-18 12:00:49.762232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.331 [2024-04-18 12:00:49.762258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.331 [2024-04-18 12:00:49.762442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.331 [2024-04-18 12:00:49.762457] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.331 [2024-04-18 12:00:49.762464] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762471] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:24:59.331 [2024-04-18 12:00:49.762495] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:24:59.331 [2024-04-18 12:00:49.762503] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762513] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.762520] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.802633] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.331 [2024-04-18 12:00:49.802657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.331 [2024-04-18 12:00:49.802664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.331 [2024-04-18 12:00:49.802672] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.331 ===================================================== 00:24:59.331 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:59.331 ===================================================== 00:24:59.331 Controller Capabilities/Features 00:24:59.331 ================================ 00:24:59.331 Vendor ID: 0000 00:24:59.331 Subsystem Vendor ID: 0000 00:24:59.331 Serial Number: .................... 
00:24:59.331 Model Number: ........................................ 00:24:59.331 Firmware Version: 24.05 00:24:59.331 Recommended Arb Burst: 0 00:24:59.331 IEEE OUI Identifier: 00 00 00 00:24:59.331 Multi-path I/O 00:24:59.331 May have multiple subsystem ports: No 00:24:59.331 May have multiple controllers: No 00:24:59.331 Associated with SR-IOV VF: No 00:24:59.331 Max Data Transfer Size: 131072 00:24:59.332 Max Number of Namespaces: 0 00:24:59.332 Max Number of I/O Queues: 1024 00:24:59.332 NVMe Specification Version (VS): 1.3 00:24:59.332 NVMe Specification Version (Identify): 1.3 00:24:59.332 Maximum Queue Entries: 128 00:24:59.332 Contiguous Queues Required: Yes 00:24:59.332 Arbitration Mechanisms Supported 00:24:59.332 Weighted Round Robin: Not Supported 00:24:59.332 Vendor Specific: Not Supported 00:24:59.332 Reset Timeout: 15000 ms 00:24:59.332 Doorbell Stride: 4 bytes 00:24:59.332 NVM Subsystem Reset: Not Supported 00:24:59.332 Command Sets Supported 00:24:59.332 NVM Command Set: Supported 00:24:59.332 Boot Partition: Not Supported 00:24:59.332 Memory Page Size Minimum: 4096 bytes 00:24:59.332 Memory Page Size Maximum: 4096 bytes 00:24:59.332 Persistent Memory Region: Not Supported 00:24:59.332 Optional Asynchronous Events Supported 00:24:59.332 Namespace Attribute Notices: Not Supported 00:24:59.332 Firmware Activation Notices: Not Supported 00:24:59.332 ANA Change Notices: Not Supported 00:24:59.332 PLE Aggregate Log Change Notices: Not Supported 00:24:59.332 LBA Status Info Alert Notices: Not Supported 00:24:59.332 EGE Aggregate Log Change Notices: Not Supported 00:24:59.332 Normal NVM Subsystem Shutdown event: Not Supported 00:24:59.332 Zone Descriptor Change Notices: Not Supported 00:24:59.332 Discovery Log Change Notices: Supported 00:24:59.332 Controller Attributes 00:24:59.332 128-bit Host Identifier: Not Supported 00:24:59.332 Non-Operational Permissive Mode: Not Supported 00:24:59.332 NVM Sets: Not Supported 00:24:59.332 Read Recovery Levels: Not Supported 00:24:59.332 Endurance Groups: Not Supported 00:24:59.332 Predictable Latency Mode: Not Supported 00:24:59.332 Traffic Based Keep ALive: Not Supported 00:24:59.332 Namespace Granularity: Not Supported 00:24:59.332 SQ Associations: Not Supported 00:24:59.332 UUID List: Not Supported 00:24:59.332 Multi-Domain Subsystem: Not Supported 00:24:59.332 Fixed Capacity Management: Not Supported 00:24:59.332 Variable Capacity Management: Not Supported 00:24:59.332 Delete Endurance Group: Not Supported 00:24:59.332 Delete NVM Set: Not Supported 00:24:59.332 Extended LBA Formats Supported: Not Supported 00:24:59.332 Flexible Data Placement Supported: Not Supported 00:24:59.332 00:24:59.332 Controller Memory Buffer Support 00:24:59.332 ================================ 00:24:59.332 Supported: No 00:24:59.332 00:24:59.332 Persistent Memory Region Support 00:24:59.332 ================================ 00:24:59.332 Supported: No 00:24:59.332 00:24:59.332 Admin Command Set Attributes 00:24:59.332 ============================ 00:24:59.332 Security Send/Receive: Not Supported 00:24:59.332 Format NVM: Not Supported 00:24:59.332 Firmware Activate/Download: Not Supported 00:24:59.332 Namespace Management: Not Supported 00:24:59.332 Device Self-Test: Not Supported 00:24:59.332 Directives: Not Supported 00:24:59.332 NVMe-MI: Not Supported 00:24:59.332 Virtualization Management: Not Supported 00:24:59.332 Doorbell Buffer Config: Not Supported 00:24:59.332 Get LBA Status Capability: Not Supported 00:24:59.332 Command & Feature Lockdown Capability: 
Not Supported 00:24:59.332 Abort Command Limit: 1 00:24:59.332 Async Event Request Limit: 4 00:24:59.332 Number of Firmware Slots: N/A 00:24:59.332 Firmware Slot 1 Read-Only: N/A 00:24:59.332 Firmware Activation Without Reset: N/A 00:24:59.332 Multiple Update Detection Support: N/A 00:24:59.332 Firmware Update Granularity: No Information Provided 00:24:59.332 Per-Namespace SMART Log: No 00:24:59.332 Asymmetric Namespace Access Log Page: Not Supported 00:24:59.332 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:59.332 Command Effects Log Page: Not Supported 00:24:59.332 Get Log Page Extended Data: Supported 00:24:59.332 Telemetry Log Pages: Not Supported 00:24:59.332 Persistent Event Log Pages: Not Supported 00:24:59.332 Supported Log Pages Log Page: May Support 00:24:59.332 Commands Supported & Effects Log Page: Not Supported 00:24:59.332 Feature Identifiers & Effects Log Page:May Support 00:24:59.332 NVMe-MI Commands & Effects Log Page: May Support 00:24:59.332 Data Area 4 for Telemetry Log: Not Supported 00:24:59.332 Error Log Page Entries Supported: 128 00:24:59.332 Keep Alive: Not Supported 00:24:59.332 00:24:59.332 NVM Command Set Attributes 00:24:59.332 ========================== 00:24:59.332 Submission Queue Entry Size 00:24:59.332 Max: 1 00:24:59.332 Min: 1 00:24:59.332 Completion Queue Entry Size 00:24:59.332 Max: 1 00:24:59.332 Min: 1 00:24:59.332 Number of Namespaces: 0 00:24:59.332 Compare Command: Not Supported 00:24:59.332 Write Uncorrectable Command: Not Supported 00:24:59.332 Dataset Management Command: Not Supported 00:24:59.332 Write Zeroes Command: Not Supported 00:24:59.332 Set Features Save Field: Not Supported 00:24:59.332 Reservations: Not Supported 00:24:59.332 Timestamp: Not Supported 00:24:59.332 Copy: Not Supported 00:24:59.332 Volatile Write Cache: Not Present 00:24:59.332 Atomic Write Unit (Normal): 1 00:24:59.332 Atomic Write Unit (PFail): 1 00:24:59.332 Atomic Compare & Write Unit: 1 00:24:59.332 Fused Compare & Write: Supported 00:24:59.332 Scatter-Gather List 00:24:59.332 SGL Command Set: Supported 00:24:59.332 SGL Keyed: Supported 00:24:59.332 SGL Bit Bucket Descriptor: Not Supported 00:24:59.332 SGL Metadata Pointer: Not Supported 00:24:59.332 Oversized SGL: Not Supported 00:24:59.332 SGL Metadata Address: Not Supported 00:24:59.332 SGL Offset: Supported 00:24:59.332 Transport SGL Data Block: Not Supported 00:24:59.332 Replay Protected Memory Block: Not Supported 00:24:59.332 00:24:59.332 Firmware Slot Information 00:24:59.332 ========================= 00:24:59.332 Active slot: 0 00:24:59.332 00:24:59.332 00:24:59.332 Error Log 00:24:59.332 ========= 00:24:59.332 00:24:59.332 Active Namespaces 00:24:59.332 ================= 00:24:59.332 Discovery Log Page 00:24:59.332 ================== 00:24:59.332 Generation Counter: 2 00:24:59.332 Number of Records: 2 00:24:59.332 Record Format: 0 00:24:59.332 00:24:59.332 Discovery Log Entry 0 00:24:59.332 ---------------------- 00:24:59.332 Transport Type: 3 (TCP) 00:24:59.332 Address Family: 1 (IPv4) 00:24:59.332 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:59.332 Entry Flags: 00:24:59.332 Duplicate Returned Information: 1 00:24:59.332 Explicit Persistent Connection Support for Discovery: 1 00:24:59.332 Transport Requirements: 00:24:59.332 Secure Channel: Not Required 00:24:59.332 Port ID: 0 (0x0000) 00:24:59.332 Controller ID: 65535 (0xffff) 00:24:59.332 Admin Max SQ Size: 128 00:24:59.332 Transport Service Identifier: 4420 00:24:59.332 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:24:59.332 Transport Address: 10.0.0.2 00:24:59.332 Discovery Log Entry 1 00:24:59.332 ---------------------- 00:24:59.332 Transport Type: 3 (TCP) 00:24:59.332 Address Family: 1 (IPv4) 00:24:59.332 Subsystem Type: 2 (NVM Subsystem) 00:24:59.332 Entry Flags: 00:24:59.332 Duplicate Returned Information: 0 00:24:59.332 Explicit Persistent Connection Support for Discovery: 0 00:24:59.332 Transport Requirements: 00:24:59.332 Secure Channel: Not Required 00:24:59.332 Port ID: 0 (0x0000) 00:24:59.332 Controller ID: 65535 (0xffff) 00:24:59.332 Admin Max SQ Size: 128 00:24:59.332 Transport Service Identifier: 4420 00:24:59.332 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:59.332 Transport Address: 10.0.0.2 [2024-04-18 12:00:49.802804] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:59.332 [2024-04-18 12:00:49.802824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.332 [2024-04-18 12:00:49.802836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.332 [2024-04-18 12:00:49.802846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.332 [2024-04-18 12:00:49.802855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.332 [2024-04-18 12:00:49.802868] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.332 [2024-04-18 12:00:49.802876] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.332 [2024-04-18 12:00:49.802883] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.332 [2024-04-18 12:00:49.802898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.332 [2024-04-18 12:00:49.802919] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.332 [2024-04-18 12:00:49.803076] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.332 [2024-04-18 12:00:49.803087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.332 [2024-04-18 12:00:49.803098] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803106] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.803119] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803133] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.803146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.803167] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.803287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.803297] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.803303] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.803322] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:59.333 [2024-04-18 12:00:49.803331] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:59.333 [2024-04-18 12:00:49.803351] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803366] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.803378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.803395] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.803557] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.803567] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.803573] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.803597] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803604] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.803621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.803638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.803863] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.803871] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.803878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803884] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.803902] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803909] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.803915] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.803929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.803945] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.804056] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.804066] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.804072] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804079] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.804093] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804100] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.804118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.804133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.804232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.804241] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.804247] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804254] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.804268] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804275] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.804292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.804307] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.804409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.804418] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.804425] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.804431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.804446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.808469] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.808479] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.333 [2024-04-18 12:00:49.808491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.333 [2024-04-18 12:00:49.808513] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.333 [2024-04-18 12:00:49.808728] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:59.333 [2024-04-18 12:00:49.808738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.333 [2024-04-18 12:00:49.808744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.333 [2024-04-18 12:00:49.808751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.333 [2024-04-18 12:00:49.808767] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:59.333 00:24:59.333 12:00:49 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:59.592 [2024-04-18 12:00:49.903703] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:59.592 [2024-04-18 12:00:49.903773] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578691 ] 00:24:59.593 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.593 [2024-04-18 12:00:49.950749] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:59.593 [2024-04-18 12:00:49.950866] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:59.593 [2024-04-18 12:00:49.950881] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:59.593 [2024-04-18 12:00:49.950902] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:59.593 [2024-04-18 12:00:49.950917] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:59.593 [2024-04-18 12:00:49.954504] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:59.593 [2024-04-18 12:00:49.954553] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:24:59.593 [2024-04-18 12:00:49.962470] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:59.593 [2024-04-18 12:00:49.962492] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:59.593 [2024-04-18 12:00:49.962500] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:59.593 [2024-04-18 12:00:49.962507] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:59.593 [2024-04-18 12:00:49.962558] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.962568] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.962576] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.962601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:59.593 [2024-04-18 12:00:49.962625] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.970474] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.970494] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:24:59.593 [2024-04-18 12:00:49.970500] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.970508] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.970527] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:59.593 [2024-04-18 12:00:49.970541] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:59.593 [2024-04-18 12:00:49.970550] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:59.593 [2024-04-18 12:00:49.970570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.970579] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.970586] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.970611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.970632] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.970848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.970859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.593 [2024-04-18 12:00:49.970866] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.970873] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.970887] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:59.593 [2024-04-18 12:00:49.970901] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:59.593 [2024-04-18 12:00:49.970912] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.970922] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.970929] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.970944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.970962] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.971075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.971087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.593 [2024-04-18 12:00:49.971093] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971100] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.971109] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:59.593 [2024-04-18 12:00:49.971126] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:59.593 [2024-04-18 12:00:49.971137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971145] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971152] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.971166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.971185] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.971394] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.971404] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.593 [2024-04-18 12:00:49.971410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971416] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.971425] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:59.593 [2024-04-18 12:00:49.971441] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971448] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971464] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.971476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.971494] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.971693] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.971705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.593 [2024-04-18 12:00:49.971711] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971718] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.971726] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:59.593 [2024-04-18 12:00:49.971735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:59.593 [2024-04-18 12:00:49.971748] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:59.593 [2024-04-18 12:00:49.971857] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:59.593 [2024-04-18 12:00:49.971867] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:59.593 [2024-04-18 12:00:49.971882] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.971896] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.971914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.971930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.972123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.972135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.593 [2024-04-18 12:00:49.972142] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.972148] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.972157] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:59.593 [2024-04-18 12:00:49.972172] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.972180] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.972187] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.972200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.972216] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.972329] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.593 [2024-04-18 12:00:49.972339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.593 [2024-04-18 12:00:49.972345] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.972352] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.593 [2024-04-18 12:00:49.972360] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:59.593 [2024-04-18 12:00:49.972369] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:59.593 [2024-04-18 12:00:49.972382] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:59.593 [2024-04-18 12:00:49.972397] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:59.593 [2024-04-18 12:00:49.972417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.593 [2024-04-18 12:00:49.972425] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.593 [2024-04-18 12:00:49.972438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.593 [2024-04-18 12:00:49.972460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.593 [2024-04-18 12:00:49.972652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.593 [2024-04-18 12:00:49.972663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.594 [2024-04-18 12:00:49.972669] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:49.972677] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:24:59.594 [2024-04-18 12:00:49.972688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.594 [2024-04-18 12:00:49.972696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:49.972708] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:49.972715] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.013748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.594 [2024-04-18 12:00:50.013768] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.594 [2024-04-18 12:00:50.013776] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.013783] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.594 [2024-04-18 12:00:50.013801] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:59.594 [2024-04-18 12:00:50.013810] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:59.594 [2024-04-18 12:00:50.013819] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:59.594 [2024-04-18 12:00:50.013826] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:59.594 [2024-04-18 12:00:50.013835] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:59.594 [2024-04-18 12:00:50.013844] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.013866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.013881] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.013889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.013896] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.013910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:59.594 [2024-04-18 12:00:50.013930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.594 [2024-04-18 12:00:50.014133] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.594 [2024-04-18 
12:00:50.014143] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.594 [2024-04-18 12:00:50.014149] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014155] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:24:59.594 [2024-04-18 12:00:50.014167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014177] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014185] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.014198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.594 [2024-04-18 12:00:50.014210] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014217] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014223] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.014233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.594 [2024-04-18 12:00:50.014242] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014248] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014254] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.014264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.594 [2024-04-18 12:00:50.014273] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014279] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014286] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.014295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.594 [2024-04-18 12:00:50.014303] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.014319] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.014329] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.014336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.014348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.594 [2024-04-18 12:00:50.014367] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:59.594 [2024-04-18 12:00:50.014375] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 
00:24:59.594 [2024-04-18 12:00:50.014382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:24:59.594 [2024-04-18 12:00:50.014389] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.594 [2024-04-18 12:00:50.014396] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.594 [2024-04-18 12:00:50.018463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.594 [2024-04-18 12:00:50.018480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.594 [2024-04-18 12:00:50.018487] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.018494] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.594 [2024-04-18 12:00:50.018503] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:59.594 [2024-04-18 12:00:50.018514] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.018527] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.018541] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.018552] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.018564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.018571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.018584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:59.594 [2024-04-18 12:00:50.018603] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.594 [2024-04-18 12:00:50.018824] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.594 [2024-04-18 12:00:50.018834] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.594 [2024-04-18 12:00:50.018840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.018847] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.594 [2024-04-18 12:00:50.018908] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.018929] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.018943] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.018951] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.018964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.594 [2024-04-18 12:00:50.018981] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.594 [2024-04-18 12:00:50.019119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.594 [2024-04-18 12:00:50.019129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.594 [2024-04-18 12:00:50.019135] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.019142] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:24:59.594 [2024-04-18 12:00:50.019151] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.594 [2024-04-18 12:00:50.019158] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.019273] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.019280] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.059652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.594 [2024-04-18 12:00:50.059672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.594 [2024-04-18 12:00:50.059679] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.059686] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.594 [2024-04-18 12:00:50.059712] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:59.594 [2024-04-18 12:00:50.059736] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.059752] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:59.594 [2024-04-18 12:00:50.059769] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.059777] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.594 [2024-04-18 12:00:50.059794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.594 [2024-04-18 12:00:50.059814] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.594 [2024-04-18 12:00:50.059962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.594 [2024-04-18 12:00:50.059976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.594 [2024-04-18 12:00:50.059982] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.594 [2024-04-18 12:00:50.059989] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:24:59.594 [2024-04-18 12:00:50.059998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.595 [2024-04-18 12:00:50.060006] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.060129] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.060136] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.100615] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.595 [2024-04-18 12:00:50.100635] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.595 [2024-04-18 12:00:50.100642] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.100649] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.595 [2024-04-18 12:00:50.100674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:59.595 [2024-04-18 12:00:50.100690] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:59.595 [2024-04-18 12:00:50.100707] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.100715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.595 [2024-04-18 12:00:50.100728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.595 [2024-04-18 12:00:50.100752] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.595 [2024-04-18 12:00:50.100884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.595 [2024-04-18 12:00:50.100894] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.595 [2024-04-18 12:00:50.100900] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.100907] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:24:59.595 [2024-04-18 12:00:50.100915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.595 [2024-04-18 12:00:50.100923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.101050] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.595 [2024-04-18 12:00:50.101057] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.856 [2024-04-18 12:00:50.145491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.856 [2024-04-18 12:00:50.145498] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145505] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.856 [2024-04-18 12:00:50.145528] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:59.856 [2024-04-18 12:00:50.145544] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:59.856 [2024-04-18 12:00:50.145561] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:59.856 [2024-04-18 12:00:50.145571] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:59.856 [2024-04-18 12:00:50.145581] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:59.856 [2024-04-18 12:00:50.145590] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:59.856 [2024-04-18 12:00:50.145598] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:59.856 [2024-04-18 12:00:50.145607] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:59.856 [2024-04-18 12:00:50.145639] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145648] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.856 [2024-04-18 12:00:50.145667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.856 [2024-04-18 12:00:50.145681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145695] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:24:59.856 [2024-04-18 12:00:50.145706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.856 [2024-04-18 12:00:50.145728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.856 [2024-04-18 12:00:50.145737] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:59.856 [2024-04-18 12:00:50.145870] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.856 [2024-04-18 12:00:50.145880] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.856 [2024-04-18 12:00:50.145887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145894] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.856 [2024-04-18 12:00:50.145907] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.856 [2024-04-18 12:00:50.145918] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.856 [2024-04-18 12:00:50.145924] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145931] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:24:59.856 [2024-04-18 12:00:50.145946] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.145953] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:24:59.856 [2024-04-18 12:00:50.145964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:59.856 [2024-04-18 12:00:50.145981] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:59.856 [2024-04-18 12:00:50.146097] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.856 [2024-04-18 12:00:50.146107] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.856 [2024-04-18 12:00:50.146113] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.856 [2024-04-18 12:00:50.146119] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:24:59.856 [2024-04-18 12:00:50.146134] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146141] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:24:59.857 [2024-04-18 12:00:50.146155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.857 [2024-04-18 12:00:50.146170] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:59.857 [2024-04-18 12:00:50.146277] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.857 [2024-04-18 12:00:50.146286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.857 [2024-04-18 12:00:50.146292] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:24:59.857 [2024-04-18 12:00:50.146312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146319] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:24:59.857 [2024-04-18 12:00:50.146329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.857 [2024-04-18 12:00:50.146344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:59.857 [2024-04-18 12:00:50.146477] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.857 [2024-04-18 12:00:50.146498] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.857 [2024-04-18 12:00:50.146508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146519] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:24:59.857 [2024-04-18 12:00:50.146558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146567] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:24:59.857 [2024-04-18 12:00:50.146583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.857 [2024-04-18 12:00:50.146596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:24:59.857 [2024-04-18 12:00:50.146614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.857 [2024-04-18 12:00:50.146625] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146632] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:24:59.857 [2024-04-18 12:00:50.146643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.857 [2024-04-18 12:00:50.146657] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.146664] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:24:59.857 [2024-04-18 12:00:50.146678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.857 [2024-04-18 12:00:50.146699] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:59.857 [2024-04-18 12:00:50.146711] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:59.857 [2024-04-18 12:00:50.146718] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:24:59.857 [2024-04-18 12:00:50.146725] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:24:59.857 [2024-04-18 12:00:50.147026] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.857 [2024-04-18 12:00:50.147036] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.857 [2024-04-18 12:00:50.147045] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147053] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:24:59.857 [2024-04-18 12:00:50.147061] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:24:59.857 [2024-04-18 12:00:50.147069] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147289] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147297] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147306] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.857 [2024-04-18 12:00:50.147315] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.857 [2024-04-18 12:00:50.147321] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147327] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:24:59.857 [2024-04-18 12:00:50.147335] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:24:59.857 [2024-04-18 12:00:50.147342] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147351] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147357] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.857 [2024-04-18 12:00:50.147378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.857 [2024-04-18 12:00:50.147384] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147390] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:24:59.857 [2024-04-18 12:00:50.147398] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:24:59.857 [2024-04-18 12:00:50.147405] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147414] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147420] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.857 [2024-04-18 12:00:50.147436] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.857 [2024-04-18 12:00:50.147442] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147448] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:24:59.857 [2024-04-18 12:00:50.147462] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:24:59.857 [2024-04-18 12:00:50.147469] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147482] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147488] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147499] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.857 [2024-04-18 12:00:50.147508] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.857 [2024-04-18 12:00:50.147513] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147520] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:24:59.857 [2024-04-18 12:00:50.147544] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.857 [2024-04-18 12:00:50.147553] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.857 [2024-04-18 12:00:50.147559] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147569] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:24:59.857 [2024-04-18 12:00:50.147586] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.857 [2024-04-18 12:00:50.147601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.857 [2024-04-18 12:00:50.147607] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147613] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:24:59.857 [2024-04-18 12:00:50.147627] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.857 [2024-04-18 12:00:50.147636] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.857 [2024-04-18 12:00:50.147642] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.857 [2024-04-18 12:00:50.147648] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:24:59.857 ===================================================== 00:24:59.857 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.857 ===================================================== 00:24:59.857 Controller Capabilities/Features 00:24:59.857 ================================ 00:24:59.857 Vendor ID: 8086 00:24:59.857 Subsystem Vendor ID: 8086 00:24:59.857 Serial Number: SPDK00000000000001 00:24:59.857 Model Number: SPDK bdev Controller 00:24:59.857 Firmware Version: 24.05 00:24:59.857 Recommended Arb Burst: 6 00:24:59.857 IEEE OUI Identifier: e4 d2 5c 00:24:59.857 Multi-path I/O 00:24:59.857 May have multiple subsystem ports: Yes 00:24:59.857 May have multiple controllers: Yes 00:24:59.857 Associated with SR-IOV VF: No 00:24:59.857 Max Data Transfer Size: 131072 00:24:59.857 Max Number of Namespaces: 32 00:24:59.857 Max Number of I/O Queues: 127 00:24:59.857 NVMe Specification Version (VS): 1.3 00:24:59.857 NVMe Specification Version (Identify): 1.3 00:24:59.857 Maximum Queue Entries: 128 00:24:59.857 Contiguous Queues Required: Yes 00:24:59.857 Arbitration Mechanisms Supported 00:24:59.857 Weighted Round Robin: Not Supported 00:24:59.857 Vendor Specific: Not Supported 00:24:59.857 Reset Timeout: 15000 ms 00:24:59.857 Doorbell Stride: 4 bytes 00:24:59.857 NVM Subsystem Reset: Not Supported 00:24:59.857 Command Sets Supported 00:24:59.857 NVM Command Set: Supported 00:24:59.857 Boot Partition: Not Supported 00:24:59.857 Memory Page Size Minimum: 4096 bytes 00:24:59.857 Memory Page Size Maximum: 4096 bytes 00:24:59.857 Persistent Memory Region: Not Supported 00:24:59.857 Optional Asynchronous Events Supported 00:24:59.857 Namespace Attribute Notices: Supported 00:24:59.857 Firmware Activation Notices: Not Supported 00:24:59.857 ANA Change Notices: Not Supported 00:24:59.857 PLE Aggregate Log Change Notices: Not Supported 00:24:59.857 LBA Status Info Alert Notices: Not Supported 00:24:59.857 EGE Aggregate Log Change Notices: Not Supported 00:24:59.857 Normal NVM Subsystem Shutdown event: Not Supported 00:24:59.857 Zone Descriptor Change Notices: Not Supported 00:24:59.857 Discovery Log Change Notices: Not Supported 00:24:59.857 Controller Attributes 00:24:59.857 128-bit Host Identifier: Supported 00:24:59.857 Non-Operational Permissive Mode: Not Supported 00:24:59.857 NVM Sets: Not Supported 00:24:59.857 Read Recovery Levels: Not Supported 00:24:59.857 Endurance Groups: Not Supported 00:24:59.857 Predictable Latency Mode: Not Supported 00:24:59.857 Traffic Based Keep ALive: Not Supported 00:24:59.857 Namespace Granularity: Not Supported 00:24:59.857 SQ Associations: Not Supported 00:24:59.857 UUID List: Not Supported 00:24:59.857 Multi-Domain Subsystem: Not Supported 00:24:59.857 Fixed Capacity Management: Not Supported 00:24:59.857 Variable Capacity Management: Not Supported 00:24:59.857 Delete Endurance Group: Not Supported 00:24:59.857 Delete NVM Set: Not Supported 00:24:59.857 Extended LBA Formats Supported: Not Supported 00:24:59.857 Flexible Data Placement Supported: Not Supported 00:24:59.857 00:24:59.857 Controller Memory Buffer Support 00:24:59.857 ================================ 00:24:59.857 Supported: No 00:24:59.857 
00:24:59.857 Persistent Memory Region Support 00:24:59.857 ================================ 00:24:59.857 Supported: No 00:24:59.857 00:24:59.857 Admin Command Set Attributes 00:24:59.857 ============================ 00:24:59.857 Security Send/Receive: Not Supported 00:24:59.857 Format NVM: Not Supported 00:24:59.857 Firmware Activate/Download: Not Supported 00:24:59.857 Namespace Management: Not Supported 00:24:59.857 Device Self-Test: Not Supported 00:24:59.857 Directives: Not Supported 00:24:59.857 NVMe-MI: Not Supported 00:24:59.857 Virtualization Management: Not Supported 00:24:59.857 Doorbell Buffer Config: Not Supported 00:24:59.857 Get LBA Status Capability: Not Supported 00:24:59.857 Command & Feature Lockdown Capability: Not Supported 00:24:59.857 Abort Command Limit: 4 00:24:59.857 Async Event Request Limit: 4 00:24:59.857 Number of Firmware Slots: N/A 00:24:59.857 Firmware Slot 1 Read-Only: N/A 00:24:59.857 Firmware Activation Without Reset: N/A 00:24:59.857 Multiple Update Detection Support: N/A 00:24:59.857 Firmware Update Granularity: No Information Provided 00:24:59.857 Per-Namespace SMART Log: No 00:24:59.857 Asymmetric Namespace Access Log Page: Not Supported 00:24:59.857 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:59.857 Command Effects Log Page: Supported 00:24:59.857 Get Log Page Extended Data: Supported 00:24:59.857 Telemetry Log Pages: Not Supported 00:24:59.857 Persistent Event Log Pages: Not Supported 00:24:59.857 Supported Log Pages Log Page: May Support 00:24:59.857 Commands Supported & Effects Log Page: Not Supported 00:24:59.857 Feature Identifiers & Effects Log Page:May Support 00:24:59.857 NVMe-MI Commands & Effects Log Page: May Support 00:24:59.857 Data Area 4 for Telemetry Log: Not Supported 00:24:59.857 Error Log Page Entries Supported: 128 00:24:59.857 Keep Alive: Supported 00:24:59.857 Keep Alive Granularity: 10000 ms 00:24:59.857 00:24:59.857 NVM Command Set Attributes 00:24:59.857 ========================== 00:24:59.857 Submission Queue Entry Size 00:24:59.857 Max: 64 00:24:59.857 Min: 64 00:24:59.857 Completion Queue Entry Size 00:24:59.857 Max: 16 00:24:59.857 Min: 16 00:24:59.857 Number of Namespaces: 32 00:24:59.857 Compare Command: Supported 00:24:59.857 Write Uncorrectable Command: Not Supported 00:24:59.857 Dataset Management Command: Supported 00:24:59.857 Write Zeroes Command: Supported 00:24:59.857 Set Features Save Field: Not Supported 00:24:59.857 Reservations: Supported 00:24:59.857 Timestamp: Not Supported 00:24:59.857 Copy: Supported 00:24:59.857 Volatile Write Cache: Present 00:24:59.857 Atomic Write Unit (Normal): 1 00:24:59.857 Atomic Write Unit (PFail): 1 00:24:59.857 Atomic Compare & Write Unit: 1 00:24:59.857 Fused Compare & Write: Supported 00:24:59.857 Scatter-Gather List 00:24:59.857 SGL Command Set: Supported 00:24:59.857 SGL Keyed: Supported 00:24:59.857 SGL Bit Bucket Descriptor: Not Supported 00:24:59.857 SGL Metadata Pointer: Not Supported 00:24:59.857 Oversized SGL: Not Supported 00:24:59.857 SGL Metadata Address: Not Supported 00:24:59.857 SGL Offset: Supported 00:24:59.857 Transport SGL Data Block: Not Supported 00:24:59.857 Replay Protected Memory Block: Not Supported 00:24:59.857 00:24:59.857 Firmware Slot Information 00:24:59.857 ========================= 00:24:59.857 Active slot: 1 00:24:59.857 Slot 1 Firmware Revision: 24.05 00:24:59.857 00:24:59.857 00:24:59.857 Commands Supported and Effects 00:24:59.857 ============================== 00:24:59.857 Admin Commands 00:24:59.857 -------------- 00:24:59.857 Get Log 
Page (02h): Supported 00:24:59.857 Identify (06h): Supported 00:24:59.857 Abort (08h): Supported 00:24:59.857 Set Features (09h): Supported 00:24:59.857 Get Features (0Ah): Supported 00:24:59.857 Asynchronous Event Request (0Ch): Supported 00:24:59.857 Keep Alive (18h): Supported 00:24:59.857 I/O Commands 00:24:59.857 ------------ 00:24:59.857 Flush (00h): Supported LBA-Change 00:24:59.857 Write (01h): Supported LBA-Change 00:24:59.857 Read (02h): Supported 00:24:59.857 Compare (05h): Supported 00:24:59.857 Write Zeroes (08h): Supported LBA-Change 00:24:59.857 Dataset Management (09h): Supported LBA-Change 00:24:59.857 Copy (19h): Supported LBA-Change 00:24:59.857 Unknown (79h): Supported LBA-Change 00:24:59.857 Unknown (7Ah): Supported 00:24:59.857 00:24:59.857 Error Log 00:24:59.857 ========= 00:24:59.857 00:24:59.857 Arbitration 00:24:59.857 =========== 00:24:59.857 Arbitration Burst: 1 00:24:59.857 00:24:59.857 Power Management 00:24:59.857 ================ 00:24:59.857 Number of Power States: 1 00:24:59.857 Current Power State: Power State #0 00:24:59.857 Power State #0: 00:24:59.857 Max Power: 0.00 W 00:24:59.857 Non-Operational State: Operational 00:24:59.857 Entry Latency: Not Reported 00:24:59.857 Exit Latency: Not Reported 00:24:59.857 Relative Read Throughput: 0 00:24:59.857 Relative Read Latency: 0 00:24:59.857 Relative Write Throughput: 0 00:24:59.858 Relative Write Latency: 0 00:24:59.858 Idle Power: Not Reported 00:24:59.858 Active Power: Not Reported 00:24:59.858 Non-Operational Permissive Mode: Not Supported 00:24:59.858 00:24:59.858 Health Information 00:24:59.858 ================== 00:24:59.858 Critical Warnings: 00:24:59.858 Available Spare Space: OK 00:24:59.858 Temperature: OK 00:24:59.858 Device Reliability: OK 00:24:59.858 Read Only: No 00:24:59.858 Volatile Memory Backup: OK 00:24:59.858 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:59.858 Temperature Threshold: [2024-04-18 12:00:50.147796] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.147809] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.147822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.147842] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:24:59.858 [2024-04-18 12:00:50.147969] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.147980] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.147986] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.147993] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.148041] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:59.858 [2024-04-18 12:00:50.148059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.858 [2024-04-18 12:00:50.148070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.858 [2024-04-18 12:00:50.148080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.858 [2024-04-18 12:00:50.148089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.858 [2024-04-18 12:00:50.148102] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148109] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.148135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.148154] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.148276] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.148287] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.148293] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.148319] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148326] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.148347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.148371] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.148500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.148511] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.148517] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148524] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.148533] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:59.858 [2024-04-18 12:00:50.148541] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:59.858 [2024-04-18 12:00:50.148556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.148583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.148603] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.148722] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.148731] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.148737] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148744] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.148759] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148766] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148772] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.148783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.148799] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.148906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.148915] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.148921] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148928] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.148942] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148948] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.148955] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.148965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.148980] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.149084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.149093] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.149099] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.149106] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.149120] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.149127] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.149137] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.149148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.149163] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.149274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.149284] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.149290] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.149296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.149311] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.149318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.149324] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.149340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.149355] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.153464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.153482] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.153489] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.153495] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.153515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.153522] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.153529] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:24:59.858 [2024-04-18 12:00:50.153541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.858 [2024-04-18 12:00:50.153560] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:59.858 [2024-04-18 12:00:50.153766] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.858 [2024-04-18 12:00:50.153775] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.858 [2024-04-18 12:00:50.153781] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.858 [2024-04-18 12:00:50.153788] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:24:59.858 [2024-04-18 12:00:50.153801] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:59.858 0 Kelvin (-273 Celsius) 00:24:59.858 Available Spare: 0% 00:24:59.858 Available Spare Threshold: 0% 00:24:59.858 Life Percentage Used: 0% 00:24:59.858 Data Units Read: 0 00:24:59.858 Data Units Written: 0 00:24:59.858 Host Read Commands: 0 00:24:59.858 Host Write Commands: 0 00:24:59.858 Controller Busy Time: 0 minutes 00:24:59.858 Power Cycles: 0 00:24:59.858 Power On Hours: 0 hours 00:24:59.858 Unsafe Shutdowns: 0 00:24:59.858 Unrecoverable Media Errors: 0 00:24:59.858 Lifetime Error Log Entries: 0 00:24:59.858 Warning Temperature Time: 0 minutes 00:24:59.858 Critical Temperature Time: 0 minutes 00:24:59.858 00:24:59.858 Number of Queues 00:24:59.858 ================ 00:24:59.858 Number of I/O Submission Queues: 127 00:24:59.858 Number of I/O Completion 
Queues: 127 00:24:59.858 00:24:59.858 Active Namespaces 00:24:59.858 ================= 00:24:59.858 Namespace ID:1 00:24:59.858 Error Recovery Timeout: Unlimited 00:24:59.858 Command Set Identifier: NVM (00h) 00:24:59.858 Deallocate: Supported 00:24:59.858 Deallocated/Unwritten Error: Not Supported 00:24:59.858 Deallocated Read Value: Unknown 00:24:59.858 Deallocate in Write Zeroes: Not Supported 00:24:59.858 Deallocated Guard Field: 0xFFFF 00:24:59.858 Flush: Supported 00:24:59.858 Reservation: Supported 00:24:59.858 Namespace Sharing Capabilities: Multiple Controllers 00:24:59.858 Size (in LBAs): 131072 (0GiB) 00:24:59.858 Capacity (in LBAs): 131072 (0GiB) 00:24:59.858 Utilization (in LBAs): 131072 (0GiB) 00:24:59.858 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:59.858 EUI64: ABCDEF0123456789 00:24:59.858 UUID: cee5bf67-3934-4a36-8afa-2d38bd360a91 00:24:59.858 Thin Provisioning: Not Supported 00:24:59.858 Per-NS Atomic Units: Yes 00:24:59.858 Atomic Boundary Size (Normal): 0 00:24:59.858 Atomic Boundary Size (PFail): 0 00:24:59.858 Atomic Boundary Offset: 0 00:24:59.858 Maximum Single Source Range Length: 65535 00:24:59.858 Maximum Copy Length: 65535 00:24:59.858 Maximum Source Range Count: 1 00:24:59.858 NGUID/EUI64 Never Reused: No 00:24:59.858 Namespace Write Protected: No 00:24:59.858 Number of LBA Formats: 1 00:24:59.858 Current LBA Format: LBA Format #00 00:24:59.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:59.858 00:24:59.858 12:00:50 -- host/identify.sh@51 -- # sync 00:24:59.858 12:00:50 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.858 12:00:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.858 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:24:59.858 12:00:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.858 12:00:50 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:59.858 12:00:50 -- host/identify.sh@56 -- # nvmftestfini 00:24:59.858 12:00:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:59.858 12:00:50 -- nvmf/common.sh@117 -- # sync 00:24:59.858 12:00:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.858 12:00:50 -- nvmf/common.sh@120 -- # set +e 00:24:59.858 12:00:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.858 12:00:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.858 rmmod nvme_tcp 00:24:59.858 rmmod nvme_fabrics 00:24:59.858 rmmod nvme_keyring 00:24:59.858 12:00:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.858 12:00:50 -- nvmf/common.sh@124 -- # set -e 00:24:59.858 12:00:50 -- nvmf/common.sh@125 -- # return 0 00:24:59.858 12:00:50 -- nvmf/common.sh@478 -- # '[' -n 2578405 ']' 00:24:59.858 12:00:50 -- nvmf/common.sh@479 -- # killprocess 2578405 00:24:59.858 12:00:50 -- common/autotest_common.sh@936 -- # '[' -z 2578405 ']' 00:24:59.858 12:00:50 -- common/autotest_common.sh@940 -- # kill -0 2578405 00:24:59.858 12:00:50 -- common/autotest_common.sh@941 -- # uname 00:24:59.858 12:00:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:59.858 12:00:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2578405 00:24:59.858 12:00:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:59.858 12:00:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:59.858 12:00:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2578405' 00:24:59.858 killing process with pid 2578405 00:24:59.858 12:00:50 -- common/autotest_common.sh@955 -- # kill 
2578405 00:24:59.858 [2024-04-18 12:00:50.358232] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:59.858 12:00:50 -- common/autotest_common.sh@960 -- # wait 2578405 00:25:01.790 12:00:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:01.790 12:00:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:01.790 12:00:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:01.790 12:00:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.790 12:00:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.790 12:00:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.790 12:00:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.790 12:00:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.688 12:00:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.688 00:25:03.688 real 0m11.642s 00:25:03.688 user 0m11.721s 00:25:03.688 sys 0m5.318s 00:25:03.688 12:00:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:03.688 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.688 ************************************ 00:25:03.688 END TEST nvmf_identify 00:25:03.688 ************************************ 00:25:03.688 12:00:53 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:03.688 12:00:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:03.688 12:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:03.688 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.688 ************************************ 00:25:03.688 START TEST nvmf_perf 00:25:03.688 ************************************ 00:25:03.688 12:00:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:03.688 * Looking for test storage... 
00:25:03.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.945 12:00:54 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.945 12:00:54 -- nvmf/common.sh@7 -- # uname -s 00:25:03.945 12:00:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.945 12:00:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.945 12:00:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.945 12:00:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.945 12:00:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.945 12:00:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.945 12:00:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.946 12:00:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.946 12:00:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.946 12:00:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.946 12:00:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:03.946 12:00:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:03.946 12:00:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.946 12:00:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.946 12:00:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.946 12:00:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.946 12:00:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.946 12:00:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.946 12:00:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.946 12:00:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.946 12:00:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.946 12:00:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.946 12:00:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.946 12:00:54 -- paths/export.sh@5 -- # export PATH 00:25:03.946 12:00:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.946 12:00:54 -- nvmf/common.sh@47 -- # : 0 00:25:03.946 12:00:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.946 12:00:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.946 12:00:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.946 12:00:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.946 12:00:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.946 12:00:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.946 12:00:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.946 12:00:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.946 12:00:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:03.946 12:00:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:03.946 12:00:54 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:03.946 12:00:54 -- host/perf.sh@17 -- # nvmftestinit 00:25:03.946 12:00:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:03.946 12:00:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.946 12:00:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:03.946 12:00:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:03.946 12:00:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:03.946 12:00:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.946 12:00:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.946 12:00:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.946 12:00:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:03.946 12:00:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:03.946 12:00:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.946 12:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:10.508 12:01:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:10.508 12:01:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.508 12:01:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.508 12:01:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.508 12:01:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.508 12:01:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.508 12:01:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.508 12:01:00 -- nvmf/common.sh@295 -- # net_devs=() 
00:25:10.508 12:01:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.508 12:01:00 -- nvmf/common.sh@296 -- # e810=() 00:25:10.508 12:01:00 -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.508 12:01:00 -- nvmf/common.sh@297 -- # x722=() 00:25:10.508 12:01:00 -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.508 12:01:00 -- nvmf/common.sh@298 -- # mlx=() 00:25:10.508 12:01:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.508 12:01:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.508 12:01:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.508 12:01:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.508 12:01:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.509 12:01:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.509 12:01:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.509 12:01:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.509 12:01:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.509 12:01:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:10.509 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:10.509 12:01:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.509 12:01:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:10.509 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:10.509 12:01:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.509 12:01:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.509 12:01:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.509 12:01:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:10.509 12:01:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:10.509 12:01:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:10.509 Found net devices under 0000:af:00.0: cvl_0_0 00:25:10.509 12:01:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.509 12:01:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.509 12:01:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.509 12:01:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:10.509 12:01:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.509 12:01:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:10.509 Found net devices under 0000:af:00.1: cvl_0_1 00:25:10.509 12:01:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.509 12:01:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:10.509 12:01:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:10.509 12:01:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:10.509 12:01:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:10.509 12:01:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.509 12:01:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.509 12:01:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.509 12:01:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.509 12:01:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.509 12:01:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.509 12:01:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.509 12:01:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.509 12:01:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.509 12:01:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.509 12:01:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.509 12:01:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.509 12:01:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.766 12:01:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.767 12:01:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.767 12:01:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.767 12:01:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.767 12:01:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.767 12:01:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.767 12:01:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:25:10.767 00:25:10.767 --- 10.0.0.2 ping statistics --- 00:25:10.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.767 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:10.767 12:01:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:25:10.767 00:25:10.767 --- 10.0.0.1 ping statistics --- 00:25:10.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.767 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:25:10.767 12:01:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.767 12:01:01 -- nvmf/common.sh@411 -- # return 0 00:25:10.767 12:01:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:10.767 12:01:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.767 12:01:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:10.767 12:01:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:10.767 12:01:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.767 12:01:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:10.767 12:01:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:10.767 12:01:01 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:10.767 12:01:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:10.767 12:01:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:10.767 12:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.024 12:01:01 -- nvmf/common.sh@470 -- # nvmfpid=2582654 00:25:11.024 12:01:01 -- nvmf/common.sh@471 -- # waitforlisten 2582654 00:25:11.024 12:01:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.024 12:01:01 -- common/autotest_common.sh@817 -- # '[' -z 2582654 ']' 00:25:11.024 12:01:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.024 12:01:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:11.024 12:01:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.024 12:01:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:11.024 12:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.025 [2024-04-18 12:01:01.409773] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:11.025 [2024-04-18 12:01:01.409859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.025 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.025 [2024-04-18 12:01:01.535307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.282 [2024-04-18 12:01:01.744500] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.282 [2024-04-18 12:01:01.744549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.282 [2024-04-18 12:01:01.744562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.282 [2024-04-18 12:01:01.744575] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.282 [2024-04-18 12:01:01.744584] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
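The nvmf_tcp_init trace above splits the two E810 ports across namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), with an iptables rule admitting the NVMe/TCP listener port and a ping in each direction to verify the path. A minimal sketch of that bring-up, assuming the same interface and namespace names as in the trace, is:

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init flow seen above; interface/namespace names assumed.
    TGT_IF=cvl_0_0            # target-side port, moved into a namespace
    INI_IF=cvl_0_1            # initiator-side port, stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to the default listener port and verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1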
00:25:11.282 [2024-04-18 12:01:01.744706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.282 [2024-04-18 12:01:01.744776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.282 [2024-04-18 12:01:01.744838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.282 [2024-04-18 12:01:01.744846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.848 12:01:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:11.848 12:01:02 -- common/autotest_common.sh@850 -- # return 0 00:25:11.848 12:01:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:11.848 12:01:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:11.848 12:01:02 -- common/autotest_common.sh@10 -- # set +x 00:25:11.848 12:01:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.848 12:01:02 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:11.848 12:01:02 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:15.139 12:01:05 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:15.139 12:01:05 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:15.139 12:01:05 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:15.139 12:01:05 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:15.396 12:01:05 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:15.396 12:01:05 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:15.396 12:01:05 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:15.396 12:01:05 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:15.396 12:01:05 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:15.396 [2024-04-18 12:01:05.920300] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.653 12:01:05 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.653 12:01:06 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:15.653 12:01:06 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.922 12:01:06 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:15.922 12:01:06 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:16.181 12:01:06 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.181 [2024-04-18 12:01:06.678934] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.181 12:01:06 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:16.439 12:01:06 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:16.439 12:01:06 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:16.439 12:01:06 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
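At this point perf.sh has configured the target: the xtrace lines above create the TCP transport, a 64 MiB malloc bdev plus the local NVMe bdev picked up via gen_nvme.sh, the nqn.2016-06.io.spdk:cnode1 subsystem with both namespaces, and listeners on 10.0.0.2:4420. Condensed into a hedged sketch (rpc.py path shortened, arguments taken from the trace):

    #!/usr/bin/env bash
    # Sketch of the target configuration replayed from the trace above.
    # The rpc.py path and bdev names follow the log; adjust for your tree.
    RPC=./scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_malloc_create 64 512                                   # -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe bdev from gen_nvme.sh
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow then target either the local PCIe device (-r 'trtype:PCIe traddr:0000:d8:00.0') or the exported subsystem (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420').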
00:25:16.439 12:01:06 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:18.341 Initializing NVMe Controllers 00:25:18.341 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:18.341 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:18.341 Initialization complete. Launching workers. 00:25:18.341 ======================================================== 00:25:18.341 Latency(us) 00:25:18.341 Device Information : IOPS MiB/s Average min max 00:25:18.341 PCIE (0000:d8:00.0) NSID 1 from core 0: 93703.02 366.03 341.04 41.23 5328.84 00:25:18.341 ======================================================== 00:25:18.341 Total : 93703.02 366.03 341.04 41.23 5328.84 00:25:18.341 00:25:18.341 12:01:08 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:18.341 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.716 Initializing NVMe Controllers 00:25:19.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:19.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:19.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:19.716 Initialization complete. Launching workers. 00:25:19.716 ======================================================== 00:25:19.716 Latency(us) 00:25:19.716 Device Information : IOPS MiB/s Average min max 00:25:19.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 151.00 0.59 6826.68 295.14 45447.04 00:25:19.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18792.32 7827.77 54864.18 00:25:19.716 ======================================================== 00:25:19.716 Total : 207.00 0.81 10063.76 295.14 54864.18 00:25:19.716 00:25:19.716 12:01:09 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:19.716 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.091 Initializing NVMe Controllers 00:25:21.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:21.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:21.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:21.091 Initialization complete. Launching workers. 
00:25:21.091 ======================================================== 00:25:21.091 Latency(us) 00:25:21.091 Device Information : IOPS MiB/s Average min max 00:25:21.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8592.99 33.57 3726.32 715.47 8343.04 00:25:21.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3891.00 15.20 8274.89 7045.84 15736.78 00:25:21.091 ======================================================== 00:25:21.091 Total : 12483.99 48.77 5144.01 715.47 15736.78 00:25:21.091 00:25:21.091 12:01:11 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:21.091 12:01:11 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:21.091 12:01:11 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:21.091 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.622 Initializing NVMe Controllers 00:25:23.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:23.622 Controller IO queue size 128, less than required. 00:25:23.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.622 Controller IO queue size 128, less than required. 00:25:23.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:23.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:23.622 Initialization complete. Launching workers. 00:25:23.622 ======================================================== 00:25:23.622 Latency(us) 00:25:23.622 Device Information : IOPS MiB/s Average min max 00:25:23.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 893.49 223.37 152247.98 86266.17 352634.59 00:25:23.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.99 142.75 238002.09 128009.69 521304.80 00:25:23.622 ======================================================== 00:25:23.622 Total : 1464.49 366.12 185683.01 86266.17 521304.80 00:25:23.622 00:25:23.622 12:01:14 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:23.890 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.158 No valid NVMe controllers or AIO or URING devices found 00:25:24.158 Initializing NVMe Controllers 00:25:24.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:24.158 Controller IO queue size 128, less than required. 00:25:24.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:24.158 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:24.158 Controller IO queue size 128, less than required. 00:25:24.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:24.158 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:24.158 WARNING: Some requested NVMe devices were skipped 00:25:24.158 12:01:14 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:24.158 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.447 Initializing NVMe Controllers 00:25:27.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.447 Controller IO queue size 128, less than required. 00:25:27.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:27.447 Controller IO queue size 128, less than required. 00:25:27.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:27.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:27.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:27.447 Initialization complete. Launching workers. 00:25:27.447 00:25:27.447 ==================== 00:25:27.447 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:27.447 TCP transport: 00:25:27.447 polls: 30716 00:25:27.447 idle_polls: 8962 00:25:27.447 sock_completions: 21754 00:25:27.447 nvme_completions: 3695 00:25:27.447 submitted_requests: 5536 00:25:27.447 queued_requests: 1 00:25:27.447 00:25:27.447 ==================== 00:25:27.447 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:27.447 TCP transport: 00:25:27.447 polls: 31898 00:25:27.447 idle_polls: 10884 00:25:27.447 sock_completions: 21014 00:25:27.447 nvme_completions: 3775 00:25:27.447 submitted_requests: 5680 00:25:27.447 queued_requests: 1 00:25:27.447 ======================================================== 00:25:27.447 Latency(us) 00:25:27.447 Device Information : IOPS MiB/s Average min max 00:25:27.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 919.76 229.94 151576.66 82693.98 487654.85 00:25:27.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 939.68 234.92 139401.21 73784.98 404477.64 00:25:27.447 ======================================================== 00:25:27.447 Total : 1859.44 464.86 145423.72 73784.98 487654.85 00:25:27.447 00:25:27.447 12:01:17 -- host/perf.sh@66 -- # sync 00:25:27.447 12:01:17 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.447 12:01:17 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:27.447 12:01:17 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:27.447 12:01:17 -- host/perf.sh@114 -- # nvmftestfini 00:25:27.447 12:01:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:27.447 12:01:17 -- nvmf/common.sh@117 -- # sync 00:25:27.447 12:01:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.447 12:01:17 -- nvmf/common.sh@120 -- # set +e 00:25:27.447 12:01:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.447 12:01:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.447 rmmod nvme_tcp 00:25:27.447 rmmod nvme_fabrics 00:25:27.447 rmmod nvme_keyring 00:25:27.447 12:01:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.447 12:01:17 -- nvmf/common.sh@124 -- # set -e 00:25:27.447 12:01:17 -- nvmf/common.sh@125 -- # return 0 00:25:27.448 12:01:17 -- 
nvmf/common.sh@478 -- # '[' -n 2582654 ']' 00:25:27.448 12:01:17 -- nvmf/common.sh@479 -- # killprocess 2582654 00:25:27.448 12:01:17 -- common/autotest_common.sh@936 -- # '[' -z 2582654 ']' 00:25:27.448 12:01:17 -- common/autotest_common.sh@940 -- # kill -0 2582654 00:25:27.448 12:01:17 -- common/autotest_common.sh@941 -- # uname 00:25:27.448 12:01:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:27.448 12:01:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2582654 00:25:27.448 12:01:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:27.448 12:01:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:27.448 12:01:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2582654' 00:25:27.448 killing process with pid 2582654 00:25:27.448 12:01:17 -- common/autotest_common.sh@955 -- # kill 2582654 00:25:27.448 12:01:17 -- common/autotest_common.sh@960 -- # wait 2582654 00:25:30.733 12:01:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:30.733 12:01:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:30.733 12:01:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:30.733 12:01:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.733 12:01:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.733 12:01:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.733 12:01:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.733 12:01:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.634 12:01:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.634 00:25:32.634 real 0m28.875s 00:25:32.634 user 1m15.915s 00:25:32.634 sys 0m9.104s 00:25:32.634 12:01:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:32.634 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:25:32.634 ************************************ 00:25:32.634 END TEST nvmf_perf 00:25:32.634 ************************************ 00:25:32.634 12:01:23 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:32.634 12:01:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:32.634 12:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.634 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:25:32.892 ************************************ 00:25:32.892 START TEST nvmf_fio_host 00:25:32.892 ************************************ 00:25:32.892 12:01:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:32.892 * Looking for test storage... 
00:25:32.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.892 12:01:23 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.892 12:01:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.892 12:01:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.892 12:01:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.892 12:01:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.892 12:01:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.892 12:01:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.892 12:01:23 -- paths/export.sh@5 -- # export PATH 00:25:32.892 12:01:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.892 12:01:23 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.892 12:01:23 -- nvmf/common.sh@7 -- # uname -s 00:25:32.892 12:01:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.892 12:01:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.892 12:01:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.892 12:01:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.892 12:01:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.892 12:01:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.893 12:01:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.893 12:01:23 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.893 12:01:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.893 12:01:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.893 12:01:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:32.893 12:01:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:32.893 12:01:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.893 12:01:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.893 12:01:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.893 12:01:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.893 12:01:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.893 12:01:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.893 12:01:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.893 12:01:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.893 12:01:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.893 12:01:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.893 12:01:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.893 12:01:23 -- paths/export.sh@5 -- # export PATH 00:25:32.893 12:01:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.893 12:01:23 -- nvmf/common.sh@47 -- # : 0 00:25:32.893 12:01:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.893 12:01:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.893 12:01:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.893 12:01:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.893 12:01:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.893 12:01:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.893 12:01:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.893 12:01:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.893 12:01:23 -- host/fio.sh@12 -- # nvmftestinit 00:25:32.893 12:01:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:32.893 12:01:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.893 12:01:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:32.893 12:01:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:32.893 12:01:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:32.893 12:01:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.893 12:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.893 12:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.893 12:01:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:32.893 12:01:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:32.893 12:01:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.893 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:25:39.455 12:01:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:39.455 12:01:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.455 12:01:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.455 12:01:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.456 12:01:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.456 12:01:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.456 12:01:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.456 12:01:29 -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.456 12:01:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.456 12:01:29 -- nvmf/common.sh@296 -- # e810=() 00:25:39.456 12:01:29 -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.456 12:01:29 -- nvmf/common.sh@297 -- # x722=() 00:25:39.456 12:01:29 -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.456 12:01:29 -- nvmf/common.sh@298 -- # mlx=() 00:25:39.456 12:01:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.456 12:01:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.456 12:01:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.456 12:01:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.456 12:01:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.456 12:01:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.456 12:01:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:39.456 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:39.456 12:01:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.456 12:01:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:39.456 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:39.456 12:01:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.456 12:01:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.456 12:01:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.456 12:01:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:39.456 12:01:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.456 12:01:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:39.456 Found net devices under 0000:af:00.0: cvl_0_0 00:25:39.456 12:01:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.456 12:01:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.456 12:01:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.456 12:01:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:39.456 12:01:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.456 12:01:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:39.456 Found net devices under 0000:af:00.1: cvl_0_1 00:25:39.456 12:01:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.456 12:01:29 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:39.456 12:01:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:39.456 12:01:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:39.456 12:01:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.456 12:01:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.456 12:01:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.456 12:01:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.456 12:01:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.456 12:01:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.456 12:01:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.456 12:01:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.456 12:01:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.456 12:01:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.456 12:01:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.456 12:01:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.456 12:01:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.456 12:01:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.456 12:01:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.456 12:01:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.456 12:01:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.456 12:01:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.456 12:01:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.456 12:01:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:25:39.456 00:25:39.456 --- 10.0.0.2 ping statistics --- 00:25:39.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.456 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:25:39.456 12:01:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:25:39.456 00:25:39.456 --- 10.0.0.1 ping statistics --- 00:25:39.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.456 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:39.456 12:01:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.456 12:01:29 -- nvmf/common.sh@411 -- # return 0 00:25:39.456 12:01:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:39.456 12:01:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.456 12:01:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:39.456 12:01:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.456 12:01:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:39.456 12:01:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:39.456 12:01:29 -- host/fio.sh@14 -- # [[ y != y ]] 00:25:39.456 12:01:29 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:25:39.456 12:01:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:39.456 12:01:29 -- common/autotest_common.sh@10 -- # set +x 00:25:39.456 12:01:29 -- host/fio.sh@22 -- # nvmfpid=2589605 00:25:39.456 12:01:29 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:39.456 12:01:29 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:39.456 12:01:29 -- host/fio.sh@26 -- # waitforlisten 2589605 00:25:39.456 12:01:29 -- common/autotest_common.sh@817 -- # '[' -z 2589605 ']' 00:25:39.456 12:01:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.456 12:01:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:39.456 12:01:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.456 12:01:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:39.456 12:01:29 -- common/autotest_common.sh@10 -- # set +x 00:25:39.456 [2024-04-18 12:01:29.886827] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:39.456 [2024-04-18 12:01:29.886916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.456 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.715 [2024-04-18 12:01:30.017686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.715 [2024-04-18 12:01:30.235747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.715 [2024-04-18 12:01:30.235799] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.715 [2024-04-18 12:01:30.235813] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.715 [2024-04-18 12:01:30.235826] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.715 [2024-04-18 12:01:30.235836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
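For readability, the plumbing that nvmf_tcp_init performs in the trace above can be condensed into the sketch below. The interface names (cvl_0_0, cvl_0_1), the cvl_0_0_ns_spdk namespace, and the 10.0.0.x addresses are taken from this log; this is an outline of the steps, not the script verbatim.

# Sketch of the two-port NVMe/TCP test topology set up above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"                 # target runs in its own namespace
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"    # first E810 port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1      # target -> initiator check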
00:25:39.715 [2024-04-18 12:01:30.235921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.715 [2024-04-18 12:01:30.235998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.715 [2024-04-18 12:01:30.236095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.715 [2024-04-18 12:01:30.236104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.281 12:01:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:40.281 12:01:30 -- common/autotest_common.sh@850 -- # return 0 00:25:40.281 12:01:30 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.281 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.281 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.281 [2024-04-18 12:01:30.665859] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.281 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.281 12:01:30 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:25:40.281 12:01:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:40.282 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.282 12:01:30 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:40.282 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.282 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.282 Malloc1 00:25:40.282 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.282 12:01:30 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.282 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.282 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.282 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.282 12:01:30 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:40.282 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.282 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.282 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.282 12:01:30 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.282 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.282 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.540 [2024-04-18 12:01:30.834233] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.540 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.540 12:01:30 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:40.540 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.540 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:25:40.540 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.540 12:01:30 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:40.540 12:01:30 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:40.540 12:01:30 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:40.540 12:01:30 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:40.540 12:01:30 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:40.540 12:01:30 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:40.540 12:01:30 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:40.540 12:01:30 -- common/autotest_common.sh@1327 -- # shift 00:25:40.540 12:01:30 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:40.540 12:01:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.540 12:01:30 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:40.540 12:01:30 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:40.540 12:01:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:40.540 12:01:30 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:40.540 12:01:30 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:40.540 12:01:30 -- common/autotest_common.sh@1333 -- # break 00:25:40.540 12:01:30 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:40.540 12:01:30 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:40.798 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:40.798 fio-3.35 00:25:40.798 Starting 1 thread 00:25:40.798 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.345 00:25:43.345 test: (groupid=0, jobs=1): err= 0: pid=2590020: Thu Apr 18 12:01:33 2024 00:25:43.345 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2005msec) 00:25:43.345 slat (nsec): min=1733, max=172792, avg=1903.42, stdev=1686.00 00:25:43.345 clat (usec): min=4032, max=12182, avg=6807.34, stdev=610.84 00:25:43.345 lat (usec): min=4034, max=12184, avg=6809.25, stdev=610.88 00:25:43.345 clat percentiles (usec): 00:25:43.345 | 1.00th=[ 5276], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390], 00:25:43.345 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:25:43.345 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7701], 00:25:43.345 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10814], 99.95th=[11207], 00:25:43.345 | 99.99th=[12125] 00:25:43.345 bw ( KiB/s): min=40016, max=42336, per=99.89%, avg=41540.00, stdev=1052.23, samples=4 00:25:43.345 iops : min=10004, max=10584, avg=10385.00, stdev=263.06, samples=4 00:25:43.345 write: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.5MiB/2005msec); 0 zone resets 00:25:43.345 slat (nsec): min=1787, max=154712, avg=2008.85, stdev=1199.00 00:25:43.345 clat (usec): min=1788, max=11175, avg=5403.07, stdev=480.88 00:25:43.345 lat (usec): min=1801, max=11177, avg=5405.08, stdev=480.87 00:25:43.345 clat percentiles (usec): 00:25:43.345 | 1.00th=[ 4113], 5.00th=[ 4621], 10.00th=[ 4883], 20.00th=[ 5080], 00:25:43.345 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538], 00:25:43.345 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 
95.00th=[ 6063], 00:25:43.345 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 8455], 99.95th=[ 9896], 00:25:43.345 | 99.99th=[10683] 00:25:43.345 bw ( KiB/s): min=40640, max=42088, per=100.00%, avg=41600.00, stdev=667.32, samples=4 00:25:43.345 iops : min=10160, max=10522, avg=10400.00, stdev=166.83, samples=4 00:25:43.345 lat (msec) : 2=0.01%, 4=0.42%, 10=99.43%, 20=0.14% 00:25:43.345 cpu : usr=64.47%, sys=29.74%, ctx=67, majf=0, minf=1530 00:25:43.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:43.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.345 issued rwts: total=20844,20853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.345 00:25:43.345 Run status group 0 (all jobs): 00:25:43.345 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.4MB), run=2005-2005msec 00:25:43.345 WRITE: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.5MiB (85.4MB), run=2005-2005msec 00:25:43.604 ----------------------------------------------------- 00:25:43.604 Suppressions used: 00:25:43.604 count bytes template 00:25:43.604 1 57 /usr/src/fio/parse.c 00:25:43.604 1 8 libtcmalloc_minimal.so 00:25:43.605 ----------------------------------------------------- 00:25:43.605 00:25:43.605 12:01:34 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:43.605 12:01:34 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:43.605 12:01:34 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:43.605 12:01:34 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.605 12:01:34 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:43.605 12:01:34 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:43.605 12:01:34 -- common/autotest_common.sh@1327 -- # shift 00:25:43.605 12:01:34 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:43.605 12:01:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.605 12:01:34 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:43.605 12:01:34 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:43.605 12:01:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:43.605 12:01:34 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:43.605 12:01:34 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:43.605 12:01:34 -- common/autotest_common.sh@1333 -- # break 00:25:43.605 12:01:34 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:43.605 12:01:34 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
00:25:44.186 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:44.186 fio-3.35 00:25:44.186 Starting 1 thread 00:25:44.186 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.717 00:25:46.717 test: (groupid=0, jobs=1): err= 0: pid=2590671: Thu Apr 18 12:01:37 2024 00:25:46.717 read: IOPS=9039, BW=141MiB/s (148MB/s)(289MiB/2045msec) 00:25:46.717 slat (nsec): min=2753, max=91813, avg=3111.77, stdev=1348.32 00:25:46.717 clat (usec): min=1722, max=55023, avg=8624.05, stdev=3863.78 00:25:46.717 lat (usec): min=1725, max=55026, avg=8627.16, stdev=3863.95 00:25:46.717 clat percentiles (usec): 00:25:46.717 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6325], 00:25:46.717 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8717], 00:25:46.717 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11600], 95.00th=[13173], 00:25:46.717 | 99.00th=[17957], 99.50th=[19006], 99.90th=[54264], 99.95th=[54264], 00:25:46.717 | 99.99th=[54789] 00:25:46.717 bw ( KiB/s): min=65056, max=84352, per=50.32%, avg=72776.00, stdev=8298.08, samples=4 00:25:46.717 iops : min= 4066, max= 5272, avg=4548.50, stdev=518.63, samples=4 00:25:46.717 write: IOPS=5199, BW=81.2MiB/s (85.2MB/s)(149MiB/1834msec); 0 zone resets 00:25:46.717 slat (usec): min=29, max=280, avg=30.95, stdev= 5.49 00:25:46.717 clat (usec): min=5430, max=54960, avg=9804.11, stdev=3521.46 00:25:46.717 lat (usec): min=5460, max=54991, avg=9835.07, stdev=3522.42 00:25:46.717 clat percentiles (usec): 00:25:46.717 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:25:46.717 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:25:46.717 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12387], 95.00th=[13698], 00:25:46.717 | 99.00th=[18220], 99.50th=[18744], 99.90th=[54264], 99.95th=[54789], 00:25:46.717 | 99.99th=[54789] 00:25:46.717 bw ( KiB/s): min=68064, max=87936, per=91.31%, avg=75952.00, stdev=8606.69, samples=4 00:25:46.717 iops : min= 4254, max= 5496, avg=4747.00, stdev=537.92, samples=4 00:25:46.717 lat (msec) : 2=0.03%, 4=0.58%, 10=72.57%, 20=26.35%, 50=0.11% 00:25:46.717 lat (msec) : 100=0.36% 00:25:46.717 cpu : usr=82.49%, sys=15.46%, ctx=62, majf=0, minf=2332 00:25:46.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:46.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:46.717 issued rwts: total=18485,9535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:46.717 00:25:46.717 Run status group 0 (all jobs): 00:25:46.717 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=289MiB (303MB), run=2045-2045msec 00:25:46.717 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=149MiB (156MB), run=1834-1834msec 00:25:46.717 ----------------------------------------------------- 00:25:46.717 Suppressions used: 00:25:46.717 count bytes template 00:25:46.717 1 57 /usr/src/fio/parse.c 00:25:46.717 201 19296 /usr/src/fio/iolog.c 00:25:46.717 1 8 libtcmalloc_minimal.so 00:25:46.717 ----------------------------------------------------- 00:25:46.717 00:25:46.717 12:01:37 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.717 12:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.717 12:01:37 -- common/autotest_common.sh@10 -- # set +x 
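The fio_nvme/fio_plugin helper traced above is what actually drives the I/O: it locates the ASan runtime that the build links against (ldd on the SPDK fio plugin, grep libasan), preloads that library together with the plugin, and hands fio a 'trtype=... traddr=...' style filename instead of a block device so the spdk ioengine connects straight to the target. A condensed sketch of that invocation, using the paths reported in this log:

# How the fio runs above are launched (ASan library path is the one ldd reported in this run).
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # /usr/lib64/libasan.so.8 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096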
00:25:46.717 12:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.717 12:01:37 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:25:46.717 12:01:37 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:46.717 12:01:37 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:46.717 12:01:37 -- host/fio.sh@84 -- # nvmftestfini 00:25:46.717 12:01:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:46.717 12:01:37 -- nvmf/common.sh@117 -- # sync 00:25:46.717 12:01:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:46.717 12:01:37 -- nvmf/common.sh@120 -- # set +e 00:25:46.717 12:01:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:46.975 12:01:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:46.975 rmmod nvme_tcp 00:25:46.975 rmmod nvme_fabrics 00:25:46.975 rmmod nvme_keyring 00:25:46.975 12:01:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:46.975 12:01:37 -- nvmf/common.sh@124 -- # set -e 00:25:46.975 12:01:37 -- nvmf/common.sh@125 -- # return 0 00:25:46.975 12:01:37 -- nvmf/common.sh@478 -- # '[' -n 2589605 ']' 00:25:46.975 12:01:37 -- nvmf/common.sh@479 -- # killprocess 2589605 00:25:46.975 12:01:37 -- common/autotest_common.sh@936 -- # '[' -z 2589605 ']' 00:25:46.975 12:01:37 -- common/autotest_common.sh@940 -- # kill -0 2589605 00:25:46.975 12:01:37 -- common/autotest_common.sh@941 -- # uname 00:25:46.975 12:01:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:46.975 12:01:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2589605 00:25:46.975 12:01:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:46.975 12:01:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:46.975 12:01:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2589605' 00:25:46.975 killing process with pid 2589605 00:25:46.975 12:01:37 -- common/autotest_common.sh@955 -- # kill 2589605 00:25:46.975 12:01:37 -- common/autotest_common.sh@960 -- # wait 2589605 00:25:48.379 12:01:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:48.379 12:01:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:48.379 12:01:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:48.379 12:01:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.379 12:01:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.379 12:01:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.379 12:01:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.379 12:01:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.905 12:01:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:50.905 00:25:50.905 real 0m17.731s 00:25:50.905 user 0m54.188s 00:25:50.905 sys 0m7.759s 00:25:50.905 12:01:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:50.905 12:01:40 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 ************************************ 00:25:50.905 END TEST nvmf_fio_host 00:25:50.905 ************************************ 00:25:50.905 12:01:41 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:50.905 12:01:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:50.905 12:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.905 12:01:41 -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 ************************************ 00:25:50.905 START TEST nvmf_failover 00:25:50.905 
************************************ 00:25:50.905 12:01:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:50.905 * Looking for test storage... 00:25:50.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.905 12:01:41 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.905 12:01:41 -- nvmf/common.sh@7 -- # uname -s 00:25:50.905 12:01:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.905 12:01:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.905 12:01:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.905 12:01:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.905 12:01:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.905 12:01:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.905 12:01:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.905 12:01:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.905 12:01:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.905 12:01:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.905 12:01:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:50.905 12:01:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:50.905 12:01:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.905 12:01:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.905 12:01:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.905 12:01:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.905 12:01:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.905 12:01:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.905 12:01:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.905 12:01:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.905 12:01:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.905 12:01:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.905 12:01:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.905 12:01:41 -- paths/export.sh@5 -- # export PATH 00:25:50.905 12:01:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.905 12:01:41 -- nvmf/common.sh@47 -- # : 0 00:25:50.905 12:01:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:50.905 12:01:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:50.906 12:01:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.906 12:01:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.906 12:01:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.906 12:01:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:50.906 12:01:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:50.906 12:01:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:50.906 12:01:41 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:50.906 12:01:41 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:50.906 12:01:41 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.906 12:01:41 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.906 12:01:41 -- host/failover.sh@18 -- # nvmftestinit 00:25:50.906 12:01:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:50.906 12:01:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.906 12:01:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:50.906 12:01:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:50.906 12:01:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:50.906 12:01:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.906 12:01:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.906 12:01:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.906 12:01:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:50.906 12:01:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:50.906 12:01:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.906 12:01:41 -- common/autotest_common.sh@10 -- # set +x 00:25:57.457 12:01:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:57.458 12:01:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:57.458 12:01:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:57.458 12:01:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:57.458 12:01:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:57.458 12:01:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:57.458 12:01:47 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:25:57.458 12:01:47 -- nvmf/common.sh@295 -- # net_devs=() 00:25:57.458 12:01:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:57.458 12:01:47 -- nvmf/common.sh@296 -- # e810=() 00:25:57.458 12:01:47 -- nvmf/common.sh@296 -- # local -ga e810 00:25:57.458 12:01:47 -- nvmf/common.sh@297 -- # x722=() 00:25:57.458 12:01:47 -- nvmf/common.sh@297 -- # local -ga x722 00:25:57.458 12:01:47 -- nvmf/common.sh@298 -- # mlx=() 00:25:57.458 12:01:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:57.458 12:01:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.458 12:01:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:57.458 12:01:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:57.458 12:01:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:57.458 12:01:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.458 12:01:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:57.458 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:57.458 12:01:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.458 12:01:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:57.458 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:57.458 12:01:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:57.458 12:01:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.458 12:01:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.458 12:01:47 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:25:57.458 12:01:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.458 12:01:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:57.458 Found net devices under 0000:af:00.0: cvl_0_0 00:25:57.458 12:01:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.458 12:01:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.458 12:01:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.458 12:01:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:57.458 12:01:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.458 12:01:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:57.458 Found net devices under 0000:af:00.1: cvl_0_1 00:25:57.458 12:01:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.458 12:01:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:57.458 12:01:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:57.458 12:01:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:57.458 12:01:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:57.458 12:01:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.458 12:01:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.458 12:01:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.458 12:01:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:57.458 12:01:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.458 12:01:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.458 12:01:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:57.458 12:01:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.458 12:01:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.458 12:01:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:57.458 12:01:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:57.458 12:01:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.458 12:01:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.717 12:01:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.717 12:01:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.717 12:01:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.717 12:01:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.717 12:01:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.717 12:01:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.717 12:01:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:25:57.717 00:25:57.717 --- 10.0.0.2 ping statistics --- 00:25:57.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.717 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:57.717 12:01:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:25:57.717 00:25:57.717 --- 10.0.0.1 ping statistics --- 00:25:57.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.717 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:25:57.717 12:01:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.717 12:01:48 -- nvmf/common.sh@411 -- # return 0 00:25:57.717 12:01:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:57.717 12:01:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.717 12:01:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:57.717 12:01:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:57.717 12:01:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.717 12:01:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:57.717 12:01:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:57.717 12:01:48 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:57.717 12:01:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:57.717 12:01:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:57.717 12:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:57.975 12:01:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:57.975 12:01:48 -- nvmf/common.sh@470 -- # nvmfpid=2594912 00:25:57.975 12:01:48 -- nvmf/common.sh@471 -- # waitforlisten 2594912 00:25:57.975 12:01:48 -- common/autotest_common.sh@817 -- # '[' -z 2594912 ']' 00:25:57.975 12:01:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.975 12:01:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:57.975 12:01:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.975 12:01:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:57.975 12:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:57.975 [2024-04-18 12:01:48.346618] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:57.975 [2024-04-18 12:01:48.346707] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.975 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.975 [2024-04-18 12:01:48.474374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:58.233 [2024-04-18 12:01:48.692586] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.233 [2024-04-18 12:01:48.692630] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.233 [2024-04-18 12:01:48.692642] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.233 [2024-04-18 12:01:48.692655] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.233 [2024-04-18 12:01:48.692666] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
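nvmfappstart, seen at the start of the failover test here, simply launches nvmf_tgt inside the target namespace with the requested core mask and then blocks until the app answers RPCs. Roughly, with the paths and the 0xE mask from this log (the polling loop below is a simplified stand-in for waitforlisten):

# Rough outline of nvmfappstart -m 0xE as traced above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Wait until the target accepts RPCs on the default /var/tmp/spdk.sock socket.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.5
done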
00:25:58.233 [2024-04-18 12:01:48.692793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.233 [2024-04-18 12:01:48.692852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.233 [2024-04-18 12:01:48.692858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.797 12:01:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:58.797 12:01:49 -- common/autotest_common.sh@850 -- # return 0 00:25:58.797 12:01:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:58.797 12:01:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:58.797 12:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:58.797 12:01:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.797 12:01:49 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:58.797 [2024-04-18 12:01:49.316373] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.054 12:01:49 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:59.054 Malloc0 00:25:59.311 12:01:49 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.311 12:01:49 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.568 12:01:49 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.568 [2024-04-18 12:01:50.112969] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.825 12:01:50 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:59.825 [2024-04-18 12:01:50.297514] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:59.825 12:01:50 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:00.082 [2024-04-18 12:01:50.486109] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:00.082 12:01:50 -- host/failover.sh@31 -- # bdevperf_pid=2595454 00:26:00.082 12:01:50 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:00.082 12:01:50 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:00.082 12:01:50 -- host/failover.sh@34 -- # waitforlisten 2595454 /var/tmp/bdevperf.sock 00:26:00.082 12:01:50 -- common/autotest_common.sh@817 -- # '[' -z 2595454 ']' 00:26:00.082 12:01:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.082 12:01:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:00.082 12:01:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:00.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.082 12:01:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:00.082 12:01:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.015 12:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:01.015 12:01:51 -- common/autotest_common.sh@850 -- # return 0 00:26:01.015 12:01:51 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.272 NVMe0n1 00:26:01.272 12:01:51 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.529 00:26:01.529 12:01:52 -- host/failover.sh@39 -- # run_test_pid=2595677 00:26:01.529 12:01:52 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.529 12:01:52 -- host/failover.sh@41 -- # sleep 1 00:26:02.900 12:01:53 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.900 [2024-04-18 12:01:53.172557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 [2024-04-18 12:01:53.172730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:02.900 (the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x618000002880 repeats with successive timestamps; duplicate lines omitted) 00:26:02.901 12:01:53 -- host/failover.sh@45 -- # sleep 3 00:26:06.177 12:01:56 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.177 00:26:06.177 12:01:56 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-04-18 12:01:56.724278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:06.177 (the same message for tqpair=0x618000003080 repeats; duplicate lines omitted) 00:26:06.435 12:01:56 -- host/failover.sh@50 -- # sleep 3 00:26:09.711 12:01:59 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-04-18 12:01:59.918388] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.711 12:01:59 -- host/failover.sh@55 -- # sleep 1 00:26:10.673 12:02:00 -- host/failover.sh@57 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:10.673 [2024-04-18 12:02:01.109291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109386] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109417] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 12:02:01.109551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.673 [2024-04-18 
12:02:01.110685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:10.674 12:02:01 -- host/failover.sh@59 -- # wait 2595677 00:26:17.224 0 00:26:17.224 12:02:07 -- host/failover.sh@61 -- # killprocess 2595454 00:26:17.224 12:02:07 -- common/autotest_common.sh@936 -- # '[' -z 2595454 ']' 00:26:17.224 12:02:07 -- common/autotest_common.sh@940 -- # kill -0 2595454 00:26:17.224 12:02:07 -- common/autotest_common.sh@941 -- # uname 00:26:17.224 12:02:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:17.224 12:02:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2595454 00:26:17.224 12:02:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:17.224 12:02:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:17.224 12:02:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2595454' 00:26:17.224 killing process with pid 2595454 00:26:17.224 12:02:07 -- common/autotest_common.sh@955 -- # kill 2595454 00:26:17.224 12:02:07 -- common/autotest_common.sh@960 -- # wait 2595454 00:26:17.799 12:02:08 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.799 [2024-04-18 12:01:50.592210] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:17.799 [2024-04-18 12:01:50.592310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595454 ] 00:26:17.799 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.799 [2024-04-18 12:01:50.716506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.799 [2024-04-18 12:01:50.949741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.799 Running I/O for 15 seconds... 
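For orientation, the burst of nvmf_tcp_qpair_set_recv_state errors above directly follows the nvmf_subsystem_remove_listener call issued by host/failover.sh through rpc.py. A minimal sketch of that listener toggle is below, using only the NQN, transport, address and port already visible in this log; the re-add step is an assumption that nvmf_subsystem_add_listener accepts the same -t/-a/-s flags as the remove call shown here.

# Hedged sketch, not part of the captured run: drop and later restore the
# secondary TCP listener that the failover test flips.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Remove the 10.0.0.2:4422 listener, exactly as invoked in the log above.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
# Re-add it for the next iteration (assumed symmetric -t/-a/-s flags).
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422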
00:26:17.799 [2024-04-18 12:01:53.174109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 
12:01:53.174417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.799 [2024-04-18 12:01:53.174854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.799 [2024-04-18 12:01:53.174867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.174878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.174891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.174903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.174916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.174927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.174940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.174952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.174966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.174977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.174990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86984 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:17.800 [2024-04-18 12:01:53.175459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.800 [2024-04-18 12:01:53.175784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.800 [2024-04-18 12:01:53.175860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.800 [2024-04-18 12:01:53.175873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.801 [2024-04-18 12:01:53.175884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.175897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.801 [2024-04-18 12:01:53.175909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.175922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.801 [2024-04-18 12:01:53.175934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.175947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.801 [2024-04-18 12:01:53.175959] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.175974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.801 [2024-04-18 12:01:53.175986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.175999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:17.801 [2024-04-18 12:01:53.176514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 
12:01:53.176783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.801 [2024-04-18 12:01:53.176911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.801 [2024-04-18 12:01:53.176922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.176936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.176949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.176962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.176975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.176988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.176999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:53.177436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:26:17.802 [2024-04-18 12:01:53.177468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.802 [2024-04-18 12:01:53.177479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.802 [2024-04-18 12:01:53.177491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86976 len:8 PRP1 0x0 PRP2 0x0 00:26:17.802 [2024-04-18 12:01:53.177504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:53.177788] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 
00:26:17.802 [2024-04-18 12:01:53.177810] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:26:17.802 [2024-04-18 12:01:53.177842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.802 [2024-04-18 12:01:53.177857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.802 [2024-04-18 12:01:53.177870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.802 [2024-04-18 12:01:53.177882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.802 [2024-04-18 12:01:53.177894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.802 [2024-04-18 12:01:53.177905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.802 [2024-04-18 12:01:53.177918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.802 [2024-04-18 12:01:53.177929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.802 [2024-04-18 12:01:53.177940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:17.802 [2024-04-18 12:01:53.177988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 
00:26:17.802 [2024-04-18 12:01:53.180929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:26:17.802 [2024-04-18 12:01:53.301984] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
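The sequence above is one complete failover cycle of the host's bdev_nvme layer: once the target deletes the submission queue behind 10.0.0.2:4420, every outstanding read is completed as ABORTED - SQ DELETION, the controller for nqn.2016-06.io.spdk:cnode1 is marked failed and disconnected, and the driver reconnects on the next registered path, 10.0.0.2:4421. For that to happen the host must already know about more than one path. A minimal sketch of registering alternate paths with SPDK's rpc.py follows; the bdev name Nvme0 and the -x failover multipath mode are illustrative assumptions, only the addresses, ports, and subsystem NQN come from the log above:
  # attach the primary path; -x failover keeps additional trids as standby paths (assumed mode)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # register the alternate paths the driver can fail over to (4421 and 4422, as logged)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover
With the extra trids registered, bdev_nvme_failover_trid simply advances to the next path when the active qpair is torn down, which is the 4420 to 4421 transition logged above.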
00:26:17.802 [2024-04-18 12:01:56.724836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.724881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.724905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.724923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.724937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.724950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.724963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.724975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.724988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.725013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.725038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.725063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.725088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.725112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.802 [2024-04-18 12:01:56.725150] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.802 [2024-04-18 12:01:56.725162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.803 [2024-04-18 12:01:56.725294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725409] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9384 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.725971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.725986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.726000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 
[2024-04-18 12:01:56.726011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.726024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.726037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.803 [2024-04-18 12:01:56.726050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.803 [2024-04-18 12:01:56.726061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.726982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.726995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.727008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.727020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 12:01:56.727034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.804 [2024-04-18 12:01:56.727046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.804 [2024-04-18 
12:01:56.727060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:39 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.727976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.727987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.728001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.728013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.728026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.728037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.728050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.805 [2024-04-18 12:01:56.728062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.805 [2024-04-18 12:01:56.728081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 
[2024-04-18 12:01:56.728098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:01:56.728133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:01:56.728168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:01:56.728204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:01:56.728234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:01:56.728259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000008040 is same with the state(5) to be set 00:26:17.806 [2024-04-18 12:01:56.728291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.806 [2024-04-18 12:01:56.728302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.806 [2024-04-18 12:01:56.728314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10160 len:8 PRP1 0x0 PRP2 0x0 00:26:17.806 [2024-04-18 12:01:56.728326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:01:56.728586] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller. 
00:26:17.806 [2024-04-18 12:01:56.728603] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:26:17.806 [2024-04-18 12:01:56.728636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.806 [2024-04-18 12:01:56.728649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.806 [2024-04-18 12:01:56.728662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.806 [2024-04-18 12:01:56.728674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.806 [2024-04-18 12:01:56.728686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.806 [2024-04-18 12:01:56.728697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.806 [2024-04-18 12:01:56.728709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:17.806 [2024-04-18 12:01:56.728721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.806 [2024-04-18 12:01:56.728733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:17.806 [2024-04-18 12:01:56.728766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 
00:26:17.806 [2024-04-18 12:01:56.731730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:26:17.806 [2024-04-18 12:01:56.849410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
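This is the second hop of the same cycle, 10.0.0.2:4421 to 10.0.0.2:4422, triggered the same way: the listener the host is using goes away, queued reads are aborted with SQ DELETION, and the controller resets onto the next path. On the target side a hop like this only takes the listener RPCs; a rough sketch of one move follows, assuming the test drives the ports this way (the NQN, address, and ports are taken from the log, the exact test flow is not):
  # make the next path available before tearing down the current one
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4422
  # then drop the listener the host is connected through; the host aborts queued I/O
  # and bdev_nvme fails over to 4422, as the entries above show
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4421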
00:26:17.806 [2024-04-18 12:02:01.112230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.806 [2024-04-18 12:02:01.112923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.806 [2024-04-18 12:02:01.112934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.112947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.807 [2024-04-18 12:02:01.112959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.112971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.807 [2024-04-18 12:02:01.112983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.112996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.807 [2024-04-18 12:02:01.113008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.807 [2024-04-18 12:02:01.113033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4032 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.807 [2024-04-18 12:02:01.113063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 
12:02:01.113314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113568] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.807 [2024-04-18 12:02:01.113902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.807 [2024-04-18 12:02:01.113916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.113955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.113967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.113980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.113992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 
12:02:01.114107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.808 [2024-04-18 12:02:01.114772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.808 [2024-04-18 12:02:01.114785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.809 [2024-04-18 12:02:01.114796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.809 [2024-04-18 12:02:01.114820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.809 [2024-04-18 12:02:01.114845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:17.809 [2024-04-18 12:02:01.114869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.809 [2024-04-18 12:02:01.114893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.809 [2024-04-18 12:02:01.114918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.809 [2024-04-18 12:02:01.114943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.114977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.114990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.809 [2024-04-18 12:02:01.115062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.809 [2024-04-18 12:02:01.115087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.809 [2024-04-18 12:02:01.115112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.809 [2024-04-18 12:02:01.115135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004a40 is same with the state(5) to be set 00:26:17.809 [2024-04-18 12:02:01.115352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4656 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4664 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4680 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4688 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4696 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4712 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4720 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4728 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 
[2024-04-18 12:02:01.115912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.115932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.115941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4744 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.115952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.115963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.128854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.128875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4752 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.128890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.128905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.809 [2024-04-18 12:02:01.128917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.809 [2024-04-18 12:02:01.128931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4760 len:8 PRP1 0x0 PRP2 0x0 00:26:17.809 [2024-04-18 12:02:01.128946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.809 [2024-04-18 12:02:01.128961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.128973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.128986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4784 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4792 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4808 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3792 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3800 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3816 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3824 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3832 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3848 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3856 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 
[2024-04-18 12:02:01.129831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3864 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.129955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.129968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3880 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.129984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.129998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.130011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.130024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3888 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.130039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.130054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.130067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.130080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3896 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.130097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.130112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.130124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.130138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.130153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.130167] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.130180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.130193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3912 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.130208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.130223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.130235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.130248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3920 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.130263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.810 [2024-04-18 12:02:01.130278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.810 [2024-04-18 12:02:01.130291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.810 [2024-04-18 12:02:01.130303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3928 len:8 PRP1 0x0 PRP2 0x0 00:26:17.810 [2024-04-18 12:02:01.130319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3944 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3952 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:26:17.811 [2024-04-18 12:02:01.130525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3960 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3976 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3984 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3992 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130864] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4008 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4016 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.130948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.130963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.130977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.130991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4024 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4040 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4048 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:26:17.811 [2024-04-18 12:02:01.131216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4056 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4072 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4080 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4088 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131559] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4104 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.811 [2024-04-18 12:02:01.131615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4112 len:8 PRP1 0x0 PRP2 0x0 00:26:17.811 [2024-04-18 12:02:01.131631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.811 [2024-04-18 12:02:01.131645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.811 [2024-04-18 12:02:01.131658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.131671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4120 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.131686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.131701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.131713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.131726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.131741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.131757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.131769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.131782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4136 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.131798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.131813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.131824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.131838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4144 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.131855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.131870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.131882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.131895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:4152 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.131911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.131925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.131938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.131951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.131966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.131982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.131994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4168 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4176 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4184 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4200 len:8 PRP1 0x0 PRP2 0x0 
00:26:17.812 [2024-04-18 12:02:01.132252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4208 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4216 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4232 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4240 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4248 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4264 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4272 len:8 PRP1 0x0 PRP2 0x0 00:26:17.812 [2024-04-18 12:02:01.132770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.812 [2024-04-18 12:02:01.132785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.812 [2024-04-18 12:02:01.132797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.812 [2024-04-18 12:02:01.132810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4280 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.132825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.132840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.132852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.132865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.132880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.132895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.132907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.132921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4296 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.132935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.132951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.132962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.132976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4304 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.132991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4312 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4328 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4336 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4344 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.813 [2024-04-18 12:02:01.133287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4360 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4368 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4376 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4392 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133626] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4400 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4408 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.133792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.133803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.133817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4424 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.133832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.140226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.140243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.140257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4432 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.140273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.140289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.140301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.140315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4440 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.140331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.140347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:26:17.813 [2024-04-18 12:02:01.140360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.140373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.140390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.140413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.140428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.140442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4456 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.140464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.140480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.813 [2024-04-18 12:02:01.140492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.813 [2024-04-18 12:02:01.140507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4464 len:8 PRP1 0x0 PRP2 0x0 00:26:17.813 [2024-04-18 12:02:01.140522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.813 [2024-04-18 12:02:01.140538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4472 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4488 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140725] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4496 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.140950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.140962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.140975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.140991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:26:17.814 [2024-04-18 12:02:01.141091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4584 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4600 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4616 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.141721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.814 [2024-04-18 12:02:01.141733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.814 [2024-04-18 12:02:01.141747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0 00:26:17.814 [2024-04-18 12:02:01.141763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.814 [2024-04-18 12:02:01.142164] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller. 
00:26:17.814 [2024-04-18 12:02:01.142184] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:17.814 [2024-04-18 12:02:01.142203] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.814 [2024-04-18 12:02:01.142260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:26:17.814 [2024-04-18 12:02:01.146321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.814 [2024-04-18 12:02:01.179436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:17.814 00:26:17.814 Latency(us) 00:26:17.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.814 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:17.814 Verification LBA range: start 0x0 length 0x4000 00:26:17.814 NVMe0n1 : 15.01 10132.27 39.58 797.28 0.00 11686.83 910.95 40894.46 00:26:17.814 =================================================================================================================== 00:26:17.815 Total : 10132.27 39.58 797.28 0.00 11686.83 910.95 40894.46 00:26:17.815 Received shutdown signal, test time was about 15.000000 seconds 00:26:17.815 00:26:17.815 Latency(us) 00:26:17.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.815 =================================================================================================================== 00:26:17.815 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.815 12:02:08 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:17.815 12:02:08 -- host/failover.sh@65 -- # count=3 00:26:17.815 12:02:08 -- host/failover.sh@67 -- # (( count != 3 )) 00:26:17.815 12:02:08 -- host/failover.sh@73 -- # bdevperf_pid=2598393 00:26:17.815 12:02:08 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:17.815 12:02:08 -- host/failover.sh@75 -- # waitforlisten 2598393 /var/tmp/bdevperf.sock 00:26:17.815 12:02:08 -- common/autotest_common.sh@817 -- # '[' -z 2598393 ']' 00:26:17.815 12:02:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.815 12:02:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:17.815 12:02:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:17.815 12:02:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:17.815 12:02:08 -- common/autotest_common.sh@10 -- # set +x 00:26:18.748 12:02:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:18.748 12:02:09 -- common/autotest_common.sh@850 -- # return 0 00:26:18.748 12:02:09 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:18.748 [2024-04-18 12:02:09.274972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:19.005 12:02:09 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:19.005 [2024-04-18 12:02:09.463617] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:19.006 12:02:09 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:19.570 NVMe0n1 00:26:19.570 12:02:09 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:19.830 00:26:19.830 12:02:10 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:20.397 00:26:20.397 12:02:10 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:20.397 12:02:10 -- host/failover.sh@82 -- # grep -q NVMe0 00:26:20.397 12:02:10 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:20.654 12:02:11 -- host/failover.sh@87 -- # sleep 3 00:26:23.932 12:02:14 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:23.932 12:02:14 -- host/failover.sh@88 -- # grep -q NVMe0 00:26:23.932 12:02:14 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:23.932 12:02:14 -- host/failover.sh@90 -- # run_test_pid=2599311 00:26:23.932 12:02:14 -- host/failover.sh@92 -- # wait 2599311 00:26:24.865 0 00:26:24.865 12:02:15 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:24.865 [2024-04-18 12:02:08.351773] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:26:24.865 [2024-04-18 12:02:08.351873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598393 ] 00:26:24.865 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.865 [2024-04-18 12:02:08.476875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.865 [2024-04-18 12:02:08.700211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.865 [2024-04-18 12:02:10.986588] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:24.865 [2024-04-18 12:02:10.986663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.865 [2024-04-18 12:02:10.986682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.865 [2024-04-18 12:02:10.986699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.865 [2024-04-18 12:02:10.986712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.865 [2024-04-18 12:02:10.986725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.865 [2024-04-18 12:02:10.986738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.865 [2024-04-18 12:02:10.986751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.865 [2024-04-18 12:02:10.986764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.865 [2024-04-18 12:02:10.986776] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.865 [2024-04-18 12:02:10.986828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.865 [2024-04-18 12:02:10.986857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:26:24.865 [2024-04-18 12:02:11.035536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:24.865 Running I/O for 1 seconds... 
00:26:24.865 00:26:24.865 Latency(us) 00:26:24.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.865 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:24.865 Verification LBA range: start 0x0 length 0x4000 00:26:24.865 NVMe0n1 : 1.01 10146.59 39.64 0.00 0.00 12566.81 2686.98 14050.92 00:26:24.865 =================================================================================================================== 00:26:24.865 Total : 10146.59 39.64 0.00 0.00 12566.81 2686.98 14050.92 00:26:24.865 12:02:15 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:24.865 12:02:15 -- host/failover.sh@95 -- # grep -q NVMe0 00:26:25.123 12:02:15 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:25.123 12:02:15 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:25.123 12:02:15 -- host/failover.sh@99 -- # grep -q NVMe0 00:26:25.380 12:02:15 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:25.637 12:02:16 -- host/failover.sh@101 -- # sleep 3 00:26:28.912 12:02:19 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:28.912 12:02:19 -- host/failover.sh@103 -- # grep -q NVMe0 00:26:28.912 12:02:19 -- host/failover.sh@108 -- # killprocess 2598393 00:26:28.912 12:02:19 -- common/autotest_common.sh@936 -- # '[' -z 2598393 ']' 00:26:28.912 12:02:19 -- common/autotest_common.sh@940 -- # kill -0 2598393 00:26:28.912 12:02:19 -- common/autotest_common.sh@941 -- # uname 00:26:28.912 12:02:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:28.912 12:02:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2598393 00:26:28.912 12:02:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:28.912 12:02:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:28.912 12:02:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2598393' 00:26:28.912 killing process with pid 2598393 00:26:28.912 12:02:19 -- common/autotest_common.sh@955 -- # kill 2598393 00:26:28.912 12:02:19 -- common/autotest_common.sh@960 -- # wait 2598393 00:26:29.845 12:02:20 -- host/failover.sh@110 -- # sync 00:26:29.845 12:02:20 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.102 12:02:20 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:30.102 12:02:20 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.102 12:02:20 -- host/failover.sh@116 -- # nvmftestfini 00:26:30.102 12:02:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:30.102 12:02:20 -- nvmf/common.sh@117 -- # sync 00:26:30.103 12:02:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.103 12:02:20 -- nvmf/common.sh@120 -- # set +e 00:26:30.103 12:02:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.103 12:02:20 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:26:30.103 rmmod nvme_tcp 00:26:30.103 rmmod nvme_fabrics 00:26:30.103 rmmod nvme_keyring 00:26:30.103 12:02:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.103 12:02:20 -- nvmf/common.sh@124 -- # set -e 00:26:30.103 12:02:20 -- nvmf/common.sh@125 -- # return 0 00:26:30.103 12:02:20 -- nvmf/common.sh@478 -- # '[' -n 2594912 ']' 00:26:30.103 12:02:20 -- nvmf/common.sh@479 -- # killprocess 2594912 00:26:30.103 12:02:20 -- common/autotest_common.sh@936 -- # '[' -z 2594912 ']' 00:26:30.103 12:02:20 -- common/autotest_common.sh@940 -- # kill -0 2594912 00:26:30.103 12:02:20 -- common/autotest_common.sh@941 -- # uname 00:26:30.103 12:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:30.103 12:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2594912 00:26:30.103 12:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:30.103 12:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:30.103 12:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2594912' 00:26:30.103 killing process with pid 2594912 00:26:30.103 12:02:20 -- common/autotest_common.sh@955 -- # kill 2594912 00:26:30.103 12:02:20 -- common/autotest_common.sh@960 -- # wait 2594912 00:26:32.027 12:02:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:32.027 12:02:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:32.027 12:02:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:32.027 12:02:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.027 12:02:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.027 12:02:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.027 12:02:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.027 12:02:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.928 12:02:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.928 00:26:33.928 real 0m42.938s 00:26:33.928 user 2m12.083s 00:26:33.928 sys 0m10.131s 00:26:33.928 12:02:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:33.928 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:26:33.928 ************************************ 00:26:33.928 END TEST nvmf_failover 00:26:33.928 ************************************ 00:26:33.928 12:02:24 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:33.928 12:02:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:33.928 12:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:33.928 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:26:33.928 ************************************ 00:26:33.928 START TEST nvmf_discovery 00:26:33.928 ************************************ 00:26:33.928 12:02:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:33.928 * Looking for test storage... 
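For reference while reading the failover output above: stripped of the framework's bookkeeping (grep checks, sleeps, killprocess), the RPC sequence that test exercised reduces to roughly the sketch below. Every command appears in the trace; $SPDK_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and collapsing the three attach calls into a loop is a condensation, not the literal failover.sh script.

  # Expose the subsystem on two additional ports, then attach bdevperf to all three paths
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Drop the first path, then run I/O and expect bdev_nvme to fail over to 10.0.0.2:4421
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # Teardown: detach the remaining paths and delete the subsystem
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1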
00:26:33.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.928 12:02:24 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.928 12:02:24 -- nvmf/common.sh@7 -- # uname -s 00:26:33.928 12:02:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.928 12:02:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.928 12:02:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.928 12:02:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.928 12:02:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.928 12:02:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.928 12:02:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.928 12:02:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.928 12:02:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.928 12:02:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.928 12:02:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:33.928 12:02:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:33.928 12:02:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.928 12:02:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.928 12:02:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.928 12:02:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.928 12:02:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.928 12:02:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.928 12:02:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.928 12:02:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.928 12:02:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.928 12:02:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.928 12:02:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.928 12:02:24 -- paths/export.sh@5 -- # export PATH 00:26:33.928 12:02:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.928 12:02:24 -- nvmf/common.sh@47 -- # : 0 00:26:33.928 12:02:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.928 12:02:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.929 12:02:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.929 12:02:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.929 12:02:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.929 12:02:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.929 12:02:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.929 12:02:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.929 12:02:24 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:33.929 12:02:24 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:33.929 12:02:24 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:33.929 12:02:24 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:33.929 12:02:24 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:33.929 12:02:24 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:33.929 12:02:24 -- host/discovery.sh@25 -- # nvmftestinit 00:26:33.929 12:02:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:33.929 12:02:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.929 12:02:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:33.929 12:02:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:33.929 12:02:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:33.929 12:02:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.929 12:02:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.929 12:02:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.186 12:02:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:34.186 12:02:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:34.186 12:02:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.186 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:26:40.747 12:02:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:40.747 12:02:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.747 12:02:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.747 12:02:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.747 12:02:30 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.747 12:02:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.747 12:02:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.747 12:02:30 -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.747 12:02:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.747 12:02:30 -- nvmf/common.sh@296 -- # e810=() 00:26:40.747 12:02:30 -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.747 12:02:30 -- nvmf/common.sh@297 -- # x722=() 00:26:40.747 12:02:30 -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.747 12:02:30 -- nvmf/common.sh@298 -- # mlx=() 00:26:40.747 12:02:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.747 12:02:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.747 12:02:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.747 12:02:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.747 12:02:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.747 12:02:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.747 12:02:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:40.747 12:02:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.747 12:02:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.747 12:02:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.747 12:02:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.747 
12:02:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.747 12:02:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:40.747 12:02:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.747 12:02:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.747 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.747 12:02:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.747 12:02:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.747 12:02:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.747 12:02:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:40.747 12:02:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.747 12:02:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.747 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.747 12:02:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.747 12:02:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:40.747 12:02:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:40.747 12:02:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:40.747 12:02:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:40.747 12:02:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.747 12:02:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.747 12:02:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.747 12:02:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.747 12:02:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.747 12:02:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.747 12:02:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.747 12:02:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.747 12:02:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.747 12:02:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.747 12:02:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.747 12:02:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.747 12:02:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.747 12:02:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.747 12:02:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.747 12:02:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.747 12:02:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.747 12:02:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.747 12:02:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.747 12:02:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:26:40.747 00:26:40.747 --- 10.0.0.2 ping statistics --- 00:26:40.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.747 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:40.747 12:02:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:26:40.747 00:26:40.747 --- 10.0.0.1 ping statistics --- 00:26:40.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.747 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:40.747 12:02:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.747 12:02:31 -- nvmf/common.sh@411 -- # return 0 00:26:40.747 12:02:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:40.747 12:02:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.747 12:02:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:40.747 12:02:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:40.747 12:02:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.747 12:02:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:40.747 12:02:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:40.747 12:02:31 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:40.747 12:02:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:40.747 12:02:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:40.747 12:02:31 -- common/autotest_common.sh@10 -- # set +x 00:26:41.006 12:02:31 -- nvmf/common.sh@470 -- # nvmfpid=2604248 00:26:41.006 12:02:31 -- nvmf/common.sh@471 -- # waitforlisten 2604248 00:26:41.006 12:02:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:41.006 12:02:31 -- common/autotest_common.sh@817 -- # '[' -z 2604248 ']' 00:26:41.006 12:02:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.006 12:02:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:41.006 12:02:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.006 12:02:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:41.006 12:02:31 -- common/autotest_common.sh@10 -- # set +x 00:26:41.006 [2024-04-18 12:02:31.393295] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:41.006 [2024-04-18 12:02:31.393398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.006 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.006 [2024-04-18 12:02:31.522891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.265 [2024-04-18 12:02:31.746095] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.265 [2024-04-18 12:02:31.746146] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.265 [2024-04-18 12:02:31.746159] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.265 [2024-04-18 12:02:31.746172] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.265 [2024-04-18 12:02:31.746182] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
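The nvmf_tcp_init sequence traced above moves one port of the E810 pair into a private network namespace so that the target (10.0.0.2, cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, cvl_0_1 in the root namespace) reach each other over real NICs on the same host. A minimal sketch of that bring-up, using the interface and namespace names from this run and assuming root privileges, is:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # keep the host firewall out of the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator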
00:26:41.265 [2024-04-18 12:02:31.746220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.832 12:02:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:41.832 12:02:32 -- common/autotest_common.sh@850 -- # return 0 00:26:41.832 12:02:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:41.832 12:02:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 12:02:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.832 12:02:32 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.832 12:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 [2024-04-18 12:02:32.191765] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.832 12:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.832 12:02:32 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:41.832 12:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 [2024-04-18 12:02:32.199941] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:41.832 12:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.832 12:02:32 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:41.832 12:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 null0 00:26:41.832 12:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.832 12:02:32 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:41.832 12:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 null1 00:26:41.832 12:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.832 12:02:32 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:41.832 12:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 12:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.832 12:02:32 -- host/discovery.sh@45 -- # hostpid=2604522 00:26:41.832 12:02:32 -- host/discovery.sh@46 -- # waitforlisten 2604522 /tmp/host.sock 00:26:41.832 12:02:32 -- common/autotest_common.sh@817 -- # '[' -z 2604522 ']' 00:26:41.832 12:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:41.832 12:02:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:41.832 12:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:41.832 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:41.832 12:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:41.832 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:41.832 12:02:32 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:41.832 [2024-04-18 12:02:32.313200] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:26:41.832 [2024-04-18 12:02:32.313287] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604522 ] 00:26:42.091 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.091 [2024-04-18 12:02:32.436575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.349 [2024-04-18 12:02:32.648313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.608 12:02:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:42.608 12:02:33 -- common/autotest_common.sh@850 -- # return 0 00:26:42.608 12:02:33 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:42.608 12:02:33 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:42.608 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.608 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.608 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.608 12:02:33 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:42.608 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.608 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.608 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.608 12:02:33 -- host/discovery.sh@72 -- # notify_id=0 00:26:42.608 12:02:33 -- host/discovery.sh@83 -- # get_subsystem_names 00:26:42.608 12:02:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.608 12:02:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.608 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.608 12:02:33 -- host/discovery.sh@59 -- # sort 00:26:42.608 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.608 12:02:33 -- host/discovery.sh@59 -- # xargs 00:26:42.608 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.608 12:02:33 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:42.608 12:02:33 -- host/discovery.sh@84 -- # get_bdev_list 00:26:42.608 12:02:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.608 12:02:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.608 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.608 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.608 12:02:33 -- host/discovery.sh@55 -- # sort 00:26:42.608 12:02:33 -- host/discovery.sh@55 -- # xargs 00:26:42.608 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:42.867 12:02:33 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@87 -- # get_subsystem_names 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set 
+x 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # sort 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # xargs 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:42.867 12:02:33 -- host/discovery.sh@88 -- # get_bdev_list 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # xargs 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # sort 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:42.867 12:02:33 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@91 -- # get_subsystem_names 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # sort 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # xargs 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:42.867 12:02:33 -- host/discovery.sh@92 -- # get_bdev_list 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # sort 00:26:42.867 12:02:33 -- host/discovery.sh@55 -- # xargs 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:42.867 12:02:33 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 [2024-04-18 12:02:33.395139] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.867 12:02:33 -- host/discovery.sh@97 -- # get_subsystem_names 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.867 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.867 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # sort 00:26:42.867 12:02:33 -- host/discovery.sh@59 -- # xargs 00:26:42.867 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.126 12:02:33 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:43.126 12:02:33 -- host/discovery.sh@98 -- # get_bdev_list 00:26:43.126 12:02:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.126 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.126 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:43.126 12:02:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.126 12:02:33 -- host/discovery.sh@55 -- # sort 00:26:43.126 12:02:33 -- host/discovery.sh@55 -- # xargs 00:26:43.126 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.126 12:02:33 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:43.126 12:02:33 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:43.126 12:02:33 -- host/discovery.sh@79 -- # expected_count=0 00:26:43.126 12:02:33 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:43.126 12:02:33 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:43.126 12:02:33 -- common/autotest_common.sh@901 -- # local max=10 00:26:43.126 12:02:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:43.126 12:02:33 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:43.126 12:02:33 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:43.126 12:02:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:43.126 12:02:33 -- host/discovery.sh@74 -- # jq '. | length' 00:26:43.126 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.126 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:43.126 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.126 12:02:33 -- host/discovery.sh@74 -- # notification_count=0 00:26:43.126 12:02:33 -- host/discovery.sh@75 -- # notify_id=0 00:26:43.126 12:02:33 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:43.126 12:02:33 -- common/autotest_common.sh@904 -- # return 0 00:26:43.126 12:02:33 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:43.126 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.126 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:43.126 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.126 12:02:33 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:43.126 12:02:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:43.126 12:02:33 -- common/autotest_common.sh@901 -- # local max=10 00:26:43.126 12:02:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:43.126 12:02:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:43.126 12:02:33 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:43.126 12:02:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.126 12:02:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:43.126 12:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.126 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:26:43.126 12:02:33 -- host/discovery.sh@59 -- # sort 00:26:43.126 12:02:33 -- host/discovery.sh@59 -- # xargs 00:26:43.126 12:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:26:43.126 12:02:33 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:26:43.126 12:02:33 -- common/autotest_common.sh@906 -- # sleep 1 00:26:43.693 [2024-04-18 12:02:34.122386] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.693 [2024-04-18 12:02:34.122417] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.693 [2024-04-18 12:02:34.122442] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.951 [2024-04-18 12:02:34.250856] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:43.951 [2024-04-18 12:02:34.434284] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.951 [2024-04-18 12:02:34.434310] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:44.210 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:44.210 12:02:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.210 12:02:34 -- host/discovery.sh@59 -- # xargs 00:26:44.210 12:02:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.210 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.210 12:02:34 -- host/discovery.sh@59 -- # sort 00:26:44.210 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.210 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.210 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.210 12:02:34 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.210 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:44.210 12:02:34 -- host/discovery.sh@55 -- # xargs 00:26:44.210 12:02:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.210 12:02:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.210 12:02:34 -- host/discovery.sh@55 -- # sort 00:26:44.210 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.210 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.210 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:44.210 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.210 12:02:34 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.210 12:02:34 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:44.210 12:02:34 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:44.210 12:02:34 -- host/discovery.sh@63 -- # sort -n 00:26:44.210 12:02:34 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:44.210 12:02:34 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:44.210 12:02:34 -- host/discovery.sh@63 -- # xargs 00:26:44.211 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.211 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.211 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.469 12:02:34 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:26:44.469 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.469 12:02:34 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:44.469 12:02:34 -- host/discovery.sh@79 -- # expected_count=1 00:26:44.469 12:02:34 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:44.469 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:44.469 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.469 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.469 12:02:34 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:44.469 12:02:34 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:44.469 12:02:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:44.469 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.469 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.469 12:02:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:44.469 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.469 12:02:34 -- host/discovery.sh@74 -- # notification_count=1 00:26:44.469 12:02:34 -- host/discovery.sh@75 -- # notify_id=1 00:26:44.469 12:02:34 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:44.469 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.469 12:02:34 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:44.469 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.469 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.469 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.469 12:02:34 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:44.469 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:44.469 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.469 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.469 12:02:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:44.469 12:02:34 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:44.469 12:02:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.469 12:02:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.469 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.470 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.470 12:02:34 -- host/discovery.sh@55 -- # sort 00:26:44.470 12:02:34 -- host/discovery.sh@55 -- # xargs 00:26:44.470 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.470 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.470 12:02:34 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:44.470 12:02:34 -- host/discovery.sh@79 -- # expected_count=1 00:26:44.470 12:02:34 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:44.470 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:44.470 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.470 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:44.470 12:02:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:44.470 12:02:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:44.470 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.470 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.470 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.470 12:02:34 -- host/discovery.sh@74 -- # notification_count=1 00:26:44.470 12:02:34 -- host/discovery.sh@75 -- # notify_id=2 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:44.470 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.470 12:02:34 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:44.470 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.470 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.470 [2024-04-18 12:02:34.916585] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:44.470 [2024-04-18 12:02:34.917696] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:44.470 [2024-04-18 12:02:34.917734] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:44.470 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.470 12:02:34 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:44.470 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:44.470 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.470 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:44.470 12:02:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.470 12:02:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.470 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.470 12:02:34 -- host/discovery.sh@59 -- # sort 00:26:44.470 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.470 12:02:34 -- host/discovery.sh@59 -- # xargs 00:26:44.470 12:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.470 12:02:34 -- common/autotest_common.sh@904 -- # return 0 00:26:44.470 12:02:34 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:44.470 12:02:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:44.470 12:02:34 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.470 12:02:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:44.470 12:02:34 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:44.470 12:02:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.470 12:02:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.470 12:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.470 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:26:44.470 12:02:34 -- host/discovery.sh@55 -- # sort 00:26:44.470 12:02:34 -- host/discovery.sh@55 -- # xargs 00:26:44.728 12:02:35 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:26:44.728 12:02:35 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.728 12:02:35 -- common/autotest_common.sh@904 -- # return 0 00:26:44.728 12:02:35 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:44.728 12:02:35 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:44.728 12:02:35 -- common/autotest_common.sh@901 -- # local max=10 00:26:44.728 12:02:35 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:44.728 12:02:35 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:44.728 12:02:35 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:44.728 12:02:35 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:44.728 12:02:35 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:44.728 12:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.728 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:26:44.728 12:02:35 -- host/discovery.sh@63 -- # sort -n 00:26:44.728 12:02:35 -- host/discovery.sh@63 -- # xargs 00:26:44.728 [2024-04-18 12:02:35.045134] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:44.728 12:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.728 12:02:35 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:44.728 12:02:35 -- common/autotest_common.sh@906 -- # sleep 1 00:26:44.728 [2024-04-18 12:02:35.267446] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:44.728 [2024-04-18 12:02:35.267480] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:44.728 [2024-04-18 12:02:35.267490] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:45.663 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:45.663 12:02:36 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:45.663 12:02:36 -- host/discovery.sh@63 -- # xargs 00:26:45.663 12:02:36 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:45.663 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.663 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.663 12:02:36 -- host/discovery.sh@63 -- # sort -n 00:26:45.663 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:45.663 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:45.663 12:02:36 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:45.663 12:02:36 -- host/discovery.sh@79 -- # expected_count=0 00:26:45.663 12:02:36 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.663 12:02:36 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.663 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.663 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:45.663 12:02:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:45.663 12:02:36 -- host/discovery.sh@74 -- # jq '. | length' 00:26:45.663 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.663 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.663 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.663 12:02:36 -- host/discovery.sh@74 -- # notification_count=0 00:26:45.663 12:02:36 -- host/discovery.sh@75 -- # notify_id=2 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:45.663 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:45.663 12:02:36 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:45.663 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.663 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.663 [2024-04-18 12:02:36.188985] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:45.663 [2024-04-18 12:02:36.189016] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:45.663 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.663 12:02:36 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:45.663 12:02:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:45.663 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.663 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:45.663 [2024-04-18 12:02:36.195725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.663 [2024-04-18 12:02:36.195756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.663 [2024-04-18 12:02:36.195771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.663 [2024-04-18 12:02:36.195784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.663 [2024-04-18 12:02:36.195796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.663 [2024-04-18 12:02:36.195808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.663 [2024-04-18 12:02:36.195824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.663 [2024-04-18 12:02:36.195836] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.663 [2024-04-18 12:02:36.195847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.663 12:02:36 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:45.663 12:02:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.663 12:02:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.663 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.663 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.663 12:02:36 -- host/discovery.sh@59 -- # sort 00:26:45.663 12:02:36 -- host/discovery.sh@59 -- # xargs 00:26:45.663 [2024-04-18 12:02:36.205732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.922 [2024-04-18 12:02:36.215772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:45.922 [2024-04-18 12:02:36.216037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.216402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.216419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:26:45.922 [2024-04-18 12:02:36.216433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.922 [2024-04-18 12:02:36.216461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.922 [2024-04-18 12:02:36.216490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.922 [2024-04-18 12:02:36.216503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.922 [2024-04-18 12:02:36.216516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.922 [2024-04-18 12:02:36.216538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.922 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.922 [2024-04-18 12:02:36.225847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:45.922 [2024-04-18 12:02:36.226256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.226595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.226612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:26:45.922 [2024-04-18 12:02:36.226626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.922 [2024-04-18 12:02:36.226643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.922 [2024-04-18 12:02:36.226668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.922 [2024-04-18 12:02:36.226681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.922 [2024-04-18 12:02:36.226692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.922 [2024-04-18 12:02:36.226709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.922 [2024-04-18 12:02:36.235918] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:45.922 [2024-04-18 12:02:36.236317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.236671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.236687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:26:45.922 [2024-04-18 12:02:36.236700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.922 [2024-04-18 12:02:36.236717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.922 [2024-04-18 12:02:36.236744] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.922 [2024-04-18 12:02:36.236756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.922 [2024-04-18 12:02:36.236768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.922 [2024-04-18 12:02:36.236783] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.922 12:02:36 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.922 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:45.922 12:02:36 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:45.922 12:02:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:45.922 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.922 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.922 [2024-04-18 12:02:36.246003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:45.922 12:02:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:45.922 [2024-04-18 12:02:36.246331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.246561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.246577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:26:45.922 [2024-04-18 12:02:36.246589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.922 [2024-04-18 12:02:36.246606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.922 [2024-04-18 12:02:36.246622] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.922 [2024-04-18 12:02:36.246632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.922 [2024-04-18 12:02:36.246643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.922 [2024-04-18 12:02:36.246658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.922 12:02:36 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:45.922 12:02:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.922 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.922 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.922 12:02:36 -- host/discovery.sh@55 -- # xargs 00:26:45.922 12:02:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.922 12:02:36 -- host/discovery.sh@55 -- # sort 00:26:45.922 [2024-04-18 12:02:36.256077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:45.922 [2024-04-18 12:02:36.256420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.256759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.922 [2024-04-18 12:02:36.256776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:26:45.922 [2024-04-18 12:02:36.256788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.922 [2024-04-18 12:02:36.256806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.922 [2024-04-18 12:02:36.256825] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.922 [2024-04-18 12:02:36.256836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.923 [2024-04-18 12:02:36.256847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.923 [2024-04-18 12:02:36.256863] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.923 [2024-04-18 12:02:36.266158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:45.923 [2024-04-18 12:02:36.266564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.923 [2024-04-18 12:02:36.266839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.923 [2024-04-18 12:02:36.266855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:26:45.923 [2024-04-18 12:02:36.266867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:26:45.923 [2024-04-18 12:02:36.266884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:26:45.923 [2024-04-18 12:02:36.266901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.923 [2024-04-18 12:02:36.266912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.923 [2024-04-18 12:02:36.266923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.923 [2024-04-18 12:02:36.266939] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.923 [2024-04-18 12:02:36.276104] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:45.923 [2024-04-18 12:02:36.276132] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:45.923 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:45.923 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:45.923 12:02:36 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:45.923 12:02:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:45.923 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.923 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:45.923 12:02:36 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:45.923 12:02:36 -- host/discovery.sh@63 -- # sort -n 00:26:45.923 12:02:36 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:45.923 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.923 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.923 12:02:36 -- host/discovery.sh@63 -- # xargs 00:26:45.923 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:26:45.923 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:45.923 12:02:36 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:45.923 12:02:36 -- host/discovery.sh@79 -- # expected_count=0 00:26:45.923 12:02:36 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.923 12:02:36 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.923 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.923 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:45.923 12:02:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:45.923 12:02:36 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:45.923 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.923 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.923 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.923 12:02:36 -- host/discovery.sh@74 -- # notification_count=0 00:26:45.923 12:02:36 -- host/discovery.sh@75 -- # notify_id=2 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:45.923 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:45.923 12:02:36 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:45.923 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.923 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.923 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.923 12:02:36 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:45.923 12:02:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:45.923 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.923 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:45.923 12:02:36 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:45.923 12:02:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.923 12:02:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.923 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.923 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:45.923 12:02:36 -- host/discovery.sh@59 -- # sort 00:26:45.923 12:02:36 -- host/discovery.sh@59 -- # xargs 00:26:45.923 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:46.181 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:46.181 12:02:36 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:46.181 12:02:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:46.181 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:46.181 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:46.181 12:02:36 -- host/discovery.sh@55 -- # sort 00:26:46.181 12:02:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.181 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.181 12:02:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.181 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:46.181 12:02:36 -- host/discovery.sh@55 -- # xargs 00:26:46.181 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:46.181 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:46.181 12:02:36 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:46.181 12:02:36 -- host/discovery.sh@79 -- # expected_count=2 00:26:46.181 12:02:36 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:46.181 12:02:36 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:46.181 12:02:36 -- common/autotest_common.sh@901 -- # local max=10 00:26:46.181 12:02:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:46.181 12:02:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:46.181 12:02:36 -- host/discovery.sh@74 -- # jq '. | length' 00:26:46.181 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.181 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:46.181 12:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.181 12:02:36 -- host/discovery.sh@74 -- # notification_count=2 00:26:46.181 12:02:36 -- host/discovery.sh@75 -- # notify_id=4 00:26:46.181 12:02:36 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:46.181 12:02:36 -- common/autotest_common.sh@904 -- # return 0 00:26:46.181 12:02:36 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:46.181 12:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.181 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:47.140 [2024-04-18 12:02:37.630205] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:47.140 [2024-04-18 12:02:37.630230] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:47.140 [2024-04-18 12:02:37.630251] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.398 [2024-04-18 12:02:37.718545] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:47.656 [2024-04-18 12:02:37.989317] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:47.656 [2024-04-18 12:02:37.989354] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:47.656 12:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.656 12:02:37 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.656 12:02:37 -- common/autotest_common.sh@638 -- # local es=0 00:26:47.656 12:02:37 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.656 12:02:37 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:47.656 12:02:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:47.656 12:02:37 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:47.656 12:02:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:47.656 12:02:37 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.656 12:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.656 12:02:37 -- 
common/autotest_common.sh@10 -- # set +x 00:26:47.656 request: 00:26:47.656 { 00:26:47.656 "name": "nvme", 00:26:47.656 "trtype": "tcp", 00:26:47.656 "traddr": "10.0.0.2", 00:26:47.656 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:47.656 "adrfam": "ipv4", 00:26:47.656 "trsvcid": "8009", 00:26:47.656 "wait_for_attach": true, 00:26:47.656 "method": "bdev_nvme_start_discovery", 00:26:47.656 "req_id": 1 00:26:47.656 } 00:26:47.656 Got JSON-RPC error response 00:26:47.656 response: 00:26:47.656 { 00:26:47.656 "code": -17, 00:26:47.656 "message": "File exists" 00:26:47.656 } 00:26:47.656 12:02:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:47.656 12:02:38 -- common/autotest_common.sh@641 -- # es=1 00:26:47.656 12:02:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:47.656 12:02:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:47.656 12:02:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:47.656 12:02:38 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # xargs 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # sort 00:26:47.656 12:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.656 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:26:47.656 12:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.656 12:02:38 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:47.656 12:02:38 -- host/discovery.sh@146 -- # get_bdev_list 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # xargs 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.656 12:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.656 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # sort 00:26:47.656 12:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.656 12:02:38 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:47.656 12:02:38 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.656 12:02:38 -- common/autotest_common.sh@638 -- # local es=0 00:26:47.656 12:02:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.656 12:02:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:47.656 12:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:47.656 12:02:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:47.656 12:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:47.656 12:02:38 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.656 12:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.656 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:26:47.656 request: 00:26:47.656 { 00:26:47.656 "name": "nvme_second", 00:26:47.656 "trtype": "tcp", 00:26:47.656 "traddr": "10.0.0.2", 00:26:47.656 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:26:47.656 "adrfam": "ipv4", 00:26:47.656 "trsvcid": "8009", 00:26:47.656 "wait_for_attach": true, 00:26:47.656 "method": "bdev_nvme_start_discovery", 00:26:47.656 "req_id": 1 00:26:47.656 } 00:26:47.656 Got JSON-RPC error response 00:26:47.656 response: 00:26:47.656 { 00:26:47.656 "code": -17, 00:26:47.656 "message": "File exists" 00:26:47.656 } 00:26:47.656 12:02:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:47.656 12:02:38 -- common/autotest_common.sh@641 -- # es=1 00:26:47.656 12:02:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:47.656 12:02:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:47.656 12:02:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:47.656 12:02:38 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:47.656 12:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.656 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # sort 00:26:47.656 12:02:38 -- host/discovery.sh@67 -- # xargs 00:26:47.656 12:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.656 12:02:38 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:47.656 12:02:38 -- host/discovery.sh@152 -- # get_bdev_list 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # xargs 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.656 12:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.656 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:26:47.656 12:02:38 -- host/discovery.sh@55 -- # sort 00:26:47.914 12:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.914 12:02:38 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:47.914 12:02:38 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:47.914 12:02:38 -- common/autotest_common.sh@638 -- # local es=0 00:26:47.914 12:02:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:47.914 12:02:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:47.915 12:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:47.915 12:02:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:47.915 12:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:47.915 12:02:38 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:47.915 12:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.915 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:26:48.850 [2024-04-18 12:02:39.249145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.850 [2024-04-18 12:02:39.249529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.850 [2024-04-18 12:02:39.249551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x614000010040 with addr=10.0.0.2, port=8010 00:26:48.850 [2024-04-18 12:02:39.249609] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:48.850 [2024-04-18 12:02:39.249622] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:48.850 [2024-04-18 12:02:39.249634] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:49.783 [2024-04-18 12:02:40.251538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.783 [2024-04-18 12:02:40.251927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.783 [2024-04-18 12:02:40.251945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010240 with addr=10.0.0.2, port=8010 00:26:49.783 [2024-04-18 12:02:40.252016] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:49.783 [2024-04-18 12:02:40.252029] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:49.783 [2024-04-18 12:02:40.252041] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:50.748 [2024-04-18 12:02:41.253497] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:50.748 request: 00:26:50.748 { 00:26:50.748 "name": "nvme_second", 00:26:50.748 "trtype": "tcp", 00:26:50.748 "traddr": "10.0.0.2", 00:26:50.748 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:50.748 "adrfam": "ipv4", 00:26:50.748 "trsvcid": "8010", 00:26:50.748 "attach_timeout_ms": 3000, 00:26:50.748 "method": "bdev_nvme_start_discovery", 00:26:50.748 "req_id": 1 00:26:50.748 } 00:26:50.748 Got JSON-RPC error response 00:26:50.748 response: 00:26:50.748 { 00:26:50.748 "code": -110, 00:26:50.748 "message": "Connection timed out" 00:26:50.748 } 00:26:50.748 12:02:41 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:50.748 12:02:41 -- common/autotest_common.sh@641 -- # es=1 00:26:50.748 12:02:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:50.748 12:02:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:50.748 12:02:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:50.748 12:02:41 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:50.748 12:02:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:50.748 12:02:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:50.748 12:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.748 12:02:41 -- host/discovery.sh@67 -- # sort 00:26:50.748 12:02:41 -- common/autotest_common.sh@10 -- # set +x 00:26:50.748 12:02:41 -- host/discovery.sh@67 -- # xargs 00:26:50.748 12:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.007 12:02:41 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:51.007 12:02:41 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:51.007 12:02:41 -- host/discovery.sh@161 -- # kill 2604522 00:26:51.007 12:02:41 -- host/discovery.sh@162 -- # nvmftestfini 00:26:51.007 12:02:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:51.007 12:02:41 -- nvmf/common.sh@117 -- # sync 00:26:51.007 12:02:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.007 12:02:41 -- nvmf/common.sh@120 -- # set +e 00:26:51.007 12:02:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.007 12:02:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.007 rmmod nvme_tcp 00:26:51.007 rmmod 
nvme_fabrics 00:26:51.007 rmmod nvme_keyring 00:26:51.007 12:02:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.007 12:02:41 -- nvmf/common.sh@124 -- # set -e 00:26:51.007 12:02:41 -- nvmf/common.sh@125 -- # return 0 00:26:51.007 12:02:41 -- nvmf/common.sh@478 -- # '[' -n 2604248 ']' 00:26:51.007 12:02:41 -- nvmf/common.sh@479 -- # killprocess 2604248 00:26:51.007 12:02:41 -- common/autotest_common.sh@936 -- # '[' -z 2604248 ']' 00:26:51.007 12:02:41 -- common/autotest_common.sh@940 -- # kill -0 2604248 00:26:51.007 12:02:41 -- common/autotest_common.sh@941 -- # uname 00:26:51.007 12:02:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:51.007 12:02:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2604248 00:26:51.007 12:02:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:51.007 12:02:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:51.007 12:02:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2604248' 00:26:51.007 killing process with pid 2604248 00:26:51.007 12:02:41 -- common/autotest_common.sh@955 -- # kill 2604248 00:26:51.007 12:02:41 -- common/autotest_common.sh@960 -- # wait 2604248 00:26:52.384 12:02:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:52.384 12:02:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:52.384 12:02:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:52.384 12:02:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.384 12:02:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.384 12:02:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.384 12:02:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.384 12:02:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.286 12:02:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.286 00:26:54.286 real 0m20.454s 00:26:54.286 user 0m24.400s 00:26:54.286 sys 0m7.272s 00:26:54.286 12:02:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:54.286 12:02:44 -- common/autotest_common.sh@10 -- # set +x 00:26:54.286 ************************************ 00:26:54.286 END TEST nvmf_discovery 00:26:54.286 ************************************ 00:26:54.286 12:02:44 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:54.286 12:02:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:54.286 12:02:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:54.286 12:02:44 -- common/autotest_common.sh@10 -- # set +x 00:26:54.545 ************************************ 00:26:54.545 START TEST nvmf_discovery_remove_ifc 00:26:54.545 ************************************ 00:26:54.545 12:02:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:54.804 * Looking for test storage... 
00:26:54.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.804 12:02:45 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.804 12:02:45 -- nvmf/common.sh@7 -- # uname -s 00:26:54.804 12:02:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.804 12:02:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.804 12:02:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.804 12:02:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.805 12:02:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.805 12:02:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.805 12:02:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.805 12:02:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.805 12:02:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.805 12:02:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.805 12:02:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:54.805 12:02:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:54.805 12:02:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.805 12:02:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.805 12:02:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.805 12:02:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.805 12:02:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.805 12:02:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.805 12:02:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.805 12:02:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.805 12:02:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.805 12:02:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.805 12:02:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.805 12:02:45 -- paths/export.sh@5 -- # export PATH 00:26:54.805 12:02:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.805 12:02:45 -- nvmf/common.sh@47 -- # : 0 00:26:54.805 12:02:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.805 12:02:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.805 12:02:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.805 12:02:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.805 12:02:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.805 12:02:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.805 12:02:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.805 12:02:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:54.805 12:02:45 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:54.805 12:02:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:54.805 12:02:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.805 12:02:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:54.805 12:02:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:54.805 12:02:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:54.805 12:02:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.805 12:02:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.805 12:02:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.805 12:02:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:54.805 12:02:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:54.805 12:02:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.805 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:27:01.371 12:02:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:01.371 12:02:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.371 12:02:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.371 12:02:51 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.371 12:02:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.371 12:02:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.371 12:02:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.371 12:02:51 -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.371 12:02:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.371 12:02:51 -- nvmf/common.sh@296 -- # e810=() 00:27:01.371 12:02:51 -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.371 12:02:51 -- nvmf/common.sh@297 -- # x722=() 00:27:01.371 12:02:51 -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.371 12:02:51 -- nvmf/common.sh@298 -- # mlx=() 00:27:01.371 12:02:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.371 12:02:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.371 12:02:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.371 12:02:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.371 12:02:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.371 12:02:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.371 12:02:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:01.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:01.371 12:02:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.371 12:02:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.371 12:02:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:01.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:01.371 12:02:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.372 12:02:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.372 12:02:51 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.372 12:02:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.372 12:02:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:01.372 12:02:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.372 12:02:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:01.372 Found net devices under 0000:af:00.0: cvl_0_0 00:27:01.372 12:02:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.372 12:02:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.372 12:02:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.372 12:02:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:01.372 12:02:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.372 12:02:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:01.372 Found net devices under 0000:af:00.1: cvl_0_1 00:27:01.372 12:02:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.372 12:02:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:01.372 12:02:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:01.372 12:02:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:01.372 12:02:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.372 12:02:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.372 12:02:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.372 12:02:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.372 12:02:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.372 12:02:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.372 12:02:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.372 12:02:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.372 12:02:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.372 12:02:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.372 12:02:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.372 12:02:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.372 12:02:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.372 12:02:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.372 12:02:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.372 12:02:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.372 12:02:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.372 12:02:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.372 12:02:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.372 12:02:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:01.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:01.372 00:27:01.372 --- 10.0.0.2 ping statistics --- 00:27:01.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.372 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:01.372 12:02:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:27:01.372 00:27:01.372 --- 10.0.0.1 ping statistics --- 00:27:01.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.372 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:01.372 12:02:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.372 12:02:51 -- nvmf/common.sh@411 -- # return 0 00:27:01.372 12:02:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:01.372 12:02:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.372 12:02:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:01.372 12:02:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.372 12:02:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:01.372 12:02:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:01.372 12:02:51 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:01.372 12:02:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:01.372 12:02:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:01.372 12:02:51 -- common/autotest_common.sh@10 -- # set +x 00:27:01.372 12:02:51 -- nvmf/common.sh@470 -- # nvmfpid=2609988 00:27:01.372 12:02:51 -- nvmf/common.sh@471 -- # waitforlisten 2609988 00:27:01.372 12:02:51 -- common/autotest_common.sh@817 -- # '[' -z 2609988 ']' 00:27:01.372 12:02:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.372 12:02:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:01.372 12:02:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.372 12:02:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:01.372 12:02:51 -- common/autotest_common.sh@10 -- # set +x 00:27:01.372 12:02:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:01.372 [2024-04-18 12:02:51.546652] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:01.372 [2024-04-18 12:02:51.546741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.372 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.372 [2024-04-18 12:02:51.676329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.372 [2024-04-18 12:02:51.882952] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.372 [2024-04-18 12:02:51.882998] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:01.372 [2024-04-18 12:02:51.883011] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.372 [2024-04-18 12:02:51.883024] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.372 [2024-04-18 12:02:51.883033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.372 [2024-04-18 12:02:51.883069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.939 12:02:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:01.939 12:02:52 -- common/autotest_common.sh@850 -- # return 0 00:27:01.939 12:02:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:01.939 12:02:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:01.939 12:02:52 -- common/autotest_common.sh@10 -- # set +x 00:27:01.939 12:02:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.939 12:02:52 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:01.939 12:02:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.939 12:02:52 -- common/autotest_common.sh@10 -- # set +x 00:27:01.939 [2024-04-18 12:02:52.331888] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.939 [2024-04-18 12:02:52.340075] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:01.939 null0 00:27:01.939 [2024-04-18 12:02:52.372079] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.939 12:02:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.939 12:02:52 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2610024 00:27:01.939 12:02:52 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:01.939 12:02:52 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2610024 /tmp/host.sock 00:27:01.939 12:02:52 -- common/autotest_common.sh@817 -- # '[' -z 2610024 ']' 00:27:01.940 12:02:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:01.940 12:02:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:01.940 12:02:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:01.940 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:01.940 12:02:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:01.940 12:02:52 -- common/autotest_common.sh@10 -- # set +x 00:27:01.940 [2024-04-18 12:02:52.458627] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:27:01.940 [2024-04-18 12:02:52.458720] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610024 ] 00:27:02.198 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.198 [2024-04-18 12:02:52.582255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.457 [2024-04-18 12:02:52.799246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.717 12:02:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:02.717 12:02:53 -- common/autotest_common.sh@850 -- # return 0 00:27:02.717 12:02:53 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:02.717 12:02:53 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:02.717 12:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.717 12:02:53 -- common/autotest_common.sh@10 -- # set +x 00:27:02.717 12:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.717 12:02:53 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:02.717 12:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.717 12:02:53 -- common/autotest_common.sh@10 -- # set +x 00:27:03.284 12:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.284 12:02:53 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:03.284 12:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.284 12:02:53 -- common/autotest_common.sh@10 -- # set +x 00:27:04.219 [2024-04-18 12:02:54.643741] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:04.219 [2024-04-18 12:02:54.643771] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:04.219 [2024-04-18 12:02:54.643803] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:04.219 [2024-04-18 12:02:54.730062] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:04.478 [2024-04-18 12:02:54.833030] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:04.478 [2024-04-18 12:02:54.833091] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:04.478 [2024-04-18 12:02:54.833151] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:04.478 [2024-04-18 12:02:54.833171] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:04.478 [2024-04-18 12:02:54.833201] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:04.478 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.478 12:02:54 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.478 12:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.478 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:27:04.478 [2024-04-18 12:02:54.842412] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 00:27:04.478 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:04.478 12:02:54 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:04.478 12:02:55 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:04.478 12:02:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.478 12:02:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.478 12:02:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.478 12:02:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.478 12:02:55 -- common/autotest_common.sh@10 -- # set +x 00:27:04.478 12:02:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.478 12:02:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.735 12:02:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.735 12:02:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:04.735 12:02:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.669 12:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.669 12:02:56 -- common/autotest_common.sh@10 -- # set +x 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.669 12:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.669 12:02:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.603 12:02:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.603 12:02:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.603 12:02:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.603 12:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.603 12:02:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.603 12:02:57 -- common/autotest_common.sh@10 -- # set +x 00:27:06.603 12:02:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.862 12:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.862 12:02:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:06.862 12:02:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.799 12:02:58 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.799 12:02:58 -- common/autotest_common.sh@10 -- # set +x 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.799 12:02:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:07.799 12:02:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.736 12:02:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.736 12:02:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.736 12:02:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.736 12:02:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.736 12:02:59 -- common/autotest_common.sh@10 -- # set +x 00:27:08.736 12:02:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.736 12:02:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.736 12:02:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.995 12:02:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.996 12:02:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.932 [2024-04-18 12:03:00.273882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:09.932 [2024-04-18 12:03:00.273945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.932 [2024-04-18 12:03:00.273962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.932 [2024-04-18 12:03:00.273978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.932 [2024-04-18 12:03:00.273990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.932 [2024-04-18 12:03:00.274003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.932 [2024-04-18 12:03:00.274015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.932 [2024-04-18 12:03:00.274028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.932 [2024-04-18 12:03:00.274040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.932 [2024-04-18 12:03:00.274053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.932 [2024-04-18 12:03:00.274065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.932 [2024-04-18 12:03:00.274076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:27:09.932 [2024-04-18 12:03:00.283897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:27:09.932 [2024-04-18 
12:03:00.293943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:09.932 12:03:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.932 12:03:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.932 12:03:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.932 12:03:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.932 12:03:00 -- common/autotest_common.sh@10 -- # set +x 00:27:09.932 12:03:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.932 12:03:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.933 [2024-04-18 12:03:01.328516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:11.869 [2024-04-18 12:03:02.352496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:11.869 [2024-04-18 12:03:02.352576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:27:11.869 [2024-04-18 12:03:02.352603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:27:11.869 [2024-04-18 12:03:02.353217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:27:11.869 [2024-04-18 12:03:02.353262] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.870 [2024-04-18 12:03:02.353311] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:11.870 [2024-04-18 12:03:02.353353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.870 [2024-04-18 12:03:02.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.870 [2024-04-18 12:03:02.353396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.870 [2024-04-18 12:03:02.353413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.870 [2024-04-18 12:03:02.353431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.870 [2024-04-18 12:03:02.353447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.870 [2024-04-18 12:03:02.353483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.870 [2024-04-18 12:03:02.353499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.870 [2024-04-18 12:03:02.353518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.870 [2024-04-18 12:03:02.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.870 [2024-04-18 12:03:02.353551] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] 
in failed state. 00:27:11.870 [2024-04-18 12:03:02.353657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:27:11.870 [2024-04-18 12:03:02.354741] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:11.870 [2024-04-18 12:03:02.354771] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:11.870 12:03:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.870 12:03:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.870 12:03:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.249 12:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.249 12:03:03 -- common/autotest_common.sh@10 -- # set +x 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.249 12:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.249 12:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.249 12:03:03 -- common/autotest_common.sh@10 -- # set +x 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.249 12:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:13.249 12:03:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:14.187 [2024-04-18 12:03:04.409676] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:14.187 [2024-04-18 12:03:04.409705] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:14.187 [2024-04-18 12:03:04.409738] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:14.187 [2024-04-18 12:03:04.495999] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.187 12:03:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.187 12:03:04 -- common/autotest_common.sh@10 
-- # set +x 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.187 12:03:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.187 [2024-04-18 12:03:04.599871] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:14.187 [2024-04-18 12:03:04.599922] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:14.187 [2024-04-18 12:03:04.599978] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:14.187 [2024-04-18 12:03:04.599997] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:14.187 [2024-04-18 12:03:04.600011] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:14.187 [2024-04-18 12:03:04.607151] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000009e40 was disconnected and freed. delete nvme_qpair. 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:14.187 12:03:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.125 12:03:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.125 12:03:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.125 12:03:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.125 12:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.125 12:03:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.125 12:03:05 -- common/autotest_common.sh@10 -- # set +x 00:27:15.125 12:03:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.125 12:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.384 12:03:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:15.384 12:03:05 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:15.384 12:03:05 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2610024 00:27:15.384 12:03:05 -- common/autotest_common.sh@936 -- # '[' -z 2610024 ']' 00:27:15.384 12:03:05 -- common/autotest_common.sh@940 -- # kill -0 2610024 00:27:15.384 12:03:05 -- common/autotest_common.sh@941 -- # uname 00:27:15.384 12:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.384 12:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2610024 00:27:15.384 12:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:15.384 12:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:15.384 12:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2610024' 00:27:15.384 killing process with pid 2610024 00:27:15.384 12:03:05 -- common/autotest_common.sh@955 -- # kill 2610024 00:27:15.384 12:03:05 -- common/autotest_common.sh@960 -- # wait 2610024 00:27:16.323 12:03:06 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:16.323 12:03:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:16.323 12:03:06 -- nvmf/common.sh@117 -- # sync 00:27:16.323 12:03:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.323 12:03:06 -- nvmf/common.sh@120 -- # set +e 00:27:16.323 12:03:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.323 12:03:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.323 rmmod nvme_tcp 00:27:16.323 rmmod nvme_fabrics 00:27:16.323 rmmod nvme_keyring 00:27:16.323 12:03:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.323 12:03:06 -- nvmf/common.sh@124 -- # set -e 
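The scenario that just completed reduces to three pieces: start discovery with short reconnect timeouts, flap the target-side interface, and poll the host application's bdev list over its RPC socket. A minimal standalone sketch of those steps follows, assuming SPDK's scripts/rpc.py is on PATH and reusing the socket path, addresses, NQN, and interface names from this run; the test itself goes through its rpc_cmd/get_bdev_list/wait_for_bdev wrappers rather than these exact commands.

# Discovery with the aggressive reconnect/teardown timing the test passes.
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

# Current bdev names as one sorted line: "nvme0n1", empty while the link is down,
# then "nvme1n1" after rediscovery.
get_bdev_list() {
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Take the target-side port away from under the connected controller ...
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# ... and poll until the bdev disappears once ctrlr-loss-timeout expires.
while [[ -n "$(get_bdev_list)" ]]; do sleep 1; done

# Restore the port; discovery re-attaches the subsystem as a new controller (nvme1).
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
while [[ "$(get_bdev_list)" != nvme1n1 ]]; do sleep 1; done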
00:27:16.323 12:03:06 -- nvmf/common.sh@125 -- # return 0 00:27:16.323 12:03:06 -- nvmf/common.sh@478 -- # '[' -n 2609988 ']' 00:27:16.323 12:03:06 -- nvmf/common.sh@479 -- # killprocess 2609988 00:27:16.323 12:03:06 -- common/autotest_common.sh@936 -- # '[' -z 2609988 ']' 00:27:16.323 12:03:06 -- common/autotest_common.sh@940 -- # kill -0 2609988 00:27:16.323 12:03:06 -- common/autotest_common.sh@941 -- # uname 00:27:16.323 12:03:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.323 12:03:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2609988 00:27:16.582 12:03:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:16.582 12:03:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:16.582 12:03:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2609988' 00:27:16.582 killing process with pid 2609988 00:27:16.582 12:03:06 -- common/autotest_common.sh@955 -- # kill 2609988 00:27:16.582 12:03:06 -- common/autotest_common.sh@960 -- # wait 2609988 00:27:17.961 12:03:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:17.961 12:03:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:17.961 12:03:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:17.961 12:03:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.961 12:03:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.961 12:03:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.961 12:03:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.961 12:03:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.868 12:03:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.868 00:27:19.868 real 0m25.196s 00:27:19.868 user 0m30.191s 00:27:19.868 sys 0m6.999s 00:27:19.868 12:03:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:19.868 12:03:10 -- common/autotest_common.sh@10 -- # set +x 00:27:19.868 ************************************ 00:27:19.868 END TEST nvmf_discovery_remove_ifc 00:27:19.868 ************************************ 00:27:19.868 12:03:10 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.868 12:03:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:19.868 12:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:19.868 12:03:10 -- common/autotest_common.sh@10 -- # set +x 00:27:19.868 ************************************ 00:27:19.868 START TEST nvmf_identify_kernel_target 00:27:19.868 ************************************ 00:27:19.868 12:03:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:20.128 * Looking for test storage... 
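The identify_kernel_nvmf test starting here exports a local NVMe namespace through the kernel nvmet/nvmet-tcp target and then points spdk_nvme_identify at it. The configfs sequence traced further below amounts to roughly the sketch that follows; the NQN, port, addresses, and backing device are taken from this run, but the configfs attribute file names are assumed (they are the standard nvmet ones), since xtrace does not show redirection targets, and nvmet-tcp is loaded explicitly here even though the trace only shows "modprobe nvmet".

# Kernel NVMe-oF TCP target over configfs (sketch; attribute names assumed).
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
modprobe nvmet-tcp

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing disk picked by the test
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"

# The discovery log should now list the discovery subsystem plus testnqn,
# matching the two-record output seen later in the trace.
nvme discover -t tcp -a 10.0.0.1 -s 4420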
00:27:20.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.128 12:03:10 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.128 12:03:10 -- nvmf/common.sh@7 -- # uname -s 00:27:20.128 12:03:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.128 12:03:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.128 12:03:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.128 12:03:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.128 12:03:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.128 12:03:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.128 12:03:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.128 12:03:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.128 12:03:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.128 12:03:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.128 12:03:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:20.128 12:03:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:20.128 12:03:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.128 12:03:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.128 12:03:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.128 12:03:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.128 12:03:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.128 12:03:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.128 12:03:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.128 12:03:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.128 12:03:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.128 12:03:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.128 12:03:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.128 12:03:10 -- paths/export.sh@5 -- # export PATH 00:27:20.128 12:03:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.128 12:03:10 -- nvmf/common.sh@47 -- # : 0 00:27:20.128 12:03:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.128 12:03:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.128 12:03:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.128 12:03:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.128 12:03:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.128 12:03:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.128 12:03:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.128 12:03:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.128 12:03:10 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:20.128 12:03:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:20.128 12:03:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.128 12:03:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:20.128 12:03:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:20.128 12:03:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:20.128 12:03:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.128 12:03:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.128 12:03:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.128 12:03:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:20.128 12:03:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:20.128 12:03:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.128 12:03:10 -- common/autotest_common.sh@10 -- # set +x 00:27:26.697 12:03:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:26.697 12:03:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.697 12:03:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.697 12:03:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.697 12:03:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.697 12:03:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.697 12:03:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.697 12:03:16 -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.697 12:03:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.697 12:03:16 -- nvmf/common.sh@296 -- # e810=() 00:27:26.697 12:03:16 -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.697 12:03:16 -- nvmf/common.sh@297 -- # 
x722=() 00:27:26.697 12:03:16 -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.697 12:03:16 -- nvmf/common.sh@298 -- # mlx=() 00:27:26.697 12:03:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.697 12:03:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.697 12:03:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.697 12:03:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.697 12:03:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.697 12:03:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.697 12:03:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:26.697 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:26.697 12:03:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.697 12:03:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:26.697 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:26.697 12:03:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.697 12:03:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.697 12:03:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.697 12:03:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:26.697 12:03:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.697 12:03:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:26.697 Found net devices under 0000:af:00.0: cvl_0_0 00:27:26.697 12:03:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
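With the two E810 ports identified, nvmf_tcp_init (traced next) wires them into a back-to-back topology: one port is moved into a network namespace and plays the target at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace that follows; the cvl_0_0/cvl_0_1 interface names and the namespace name are specific to this host.

# Two-port loopback topology for the TCP transport tests (sketch from this run).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the trace does with single pings.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1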
00:27:26.697 12:03:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.697 12:03:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.697 12:03:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:26.697 12:03:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.697 12:03:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:26.697 Found net devices under 0000:af:00.1: cvl_0_1 00:27:26.697 12:03:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.697 12:03:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:26.697 12:03:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:26.697 12:03:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:26.697 12:03:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:26.697 12:03:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.697 12:03:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.697 12:03:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.698 12:03:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.698 12:03:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.698 12:03:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.698 12:03:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.698 12:03:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.698 12:03:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.698 12:03:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.698 12:03:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.698 12:03:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.698 12:03:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.698 12:03:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.698 12:03:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.698 12:03:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.698 12:03:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.698 12:03:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.957 12:03:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.957 12:03:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:26.957 00:27:26.957 --- 10.0.0.2 ping statistics --- 00:27:26.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.957 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:26.957 12:03:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:26.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:26.957 00:27:26.957 --- 10.0.0.1 ping statistics --- 00:27:26.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.957 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:26.957 12:03:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.957 12:03:17 -- nvmf/common.sh@411 -- # return 0 00:27:26.957 12:03:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:26.957 12:03:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.957 12:03:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.957 12:03:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:26.957 12:03:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:26.957 12:03:17 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:26.957 12:03:17 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:26.957 12:03:17 -- nvmf/common.sh@717 -- # local ip 00:27:26.957 12:03:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:26.957 12:03:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:26.957 12:03:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.957 12:03:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.957 12:03:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:26.957 12:03:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:26.957 12:03:17 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:26.957 12:03:17 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:26.957 12:03:17 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:26.957 12:03:17 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:26.957 12:03:17 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.957 12:03:17 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.957 12:03:17 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:26.957 12:03:17 -- nvmf/common.sh@628 -- # local block nvme 00:27:26.957 12:03:17 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:26.957 12:03:17 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:26.957 12:03:17 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:30.250 Waiting for block devices as requested 00:27:30.250 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:30.250 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:30.250 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:30.510 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:30.510 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:30.510 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:30.769 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:30.769 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:30.769 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:30.769 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:31.028 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:31.028 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:31.287 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:31.287 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:31.287 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:31.546 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:31.546 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:31.546 12:03:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:31.546 12:03:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:31.546 12:03:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:31.546 12:03:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:31.546 12:03:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:31.546 12:03:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:31.546 12:03:22 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:31.546 12:03:22 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:31.546 12:03:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:31.844 No valid GPT data, bailing 00:27:31.844 12:03:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:31.844 12:03:22 -- scripts/common.sh@391 -- # pt= 00:27:31.844 12:03:22 -- scripts/common.sh@392 -- # return 1 00:27:31.844 12:03:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:31.844 12:03:22 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:27:31.844 12:03:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.844 12:03:22 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:31.844 12:03:22 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:31.844 12:03:22 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:31.844 12:03:22 -- nvmf/common.sh@656 -- # echo 1 00:27:31.844 12:03:22 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:27:31.844 12:03:22 -- nvmf/common.sh@658 -- # echo 1 00:27:31.844 12:03:22 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:31.844 12:03:22 -- nvmf/common.sh@661 -- # echo tcp 00:27:31.844 12:03:22 -- nvmf/common.sh@662 -- # echo 4420 00:27:31.844 12:03:22 -- nvmf/common.sh@663 -- # echo ipv4 00:27:31.844 12:03:22 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:31.844 12:03:22 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:31.844 00:27:31.844 Discovery Log Number of Records 2, Generation counter 2 00:27:31.844 =====Discovery Log Entry 0====== 00:27:31.844 trtype: tcp 00:27:31.844 adrfam: ipv4 00:27:31.844 subtype: current discovery subsystem 00:27:31.844 treq: not specified, sq flow control disable supported 00:27:31.844 portid: 1 00:27:31.844 trsvcid: 4420 00:27:31.844 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:31.844 traddr: 10.0.0.1 00:27:31.845 eflags: none 00:27:31.845 sectype: none 00:27:31.845 =====Discovery Log Entry 1====== 00:27:31.845 trtype: tcp 00:27:31.845 adrfam: ipv4 00:27:31.845 subtype: nvme subsystem 00:27:31.845 treq: not specified, sq flow control disable supported 00:27:31.845 portid: 1 00:27:31.845 trsvcid: 4420 00:27:31.845 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:31.845 traddr: 10.0.0.1 00:27:31.845 eflags: none 00:27:31.845 sectype: none 00:27:31.845 12:03:22 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:31.845 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:31.845 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.845 ===================================================== 00:27:31.845 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:31.845 ===================================================== 00:27:31.845 Controller Capabilities/Features 00:27:31.845 ================================ 00:27:31.845 Vendor ID: 0000 00:27:31.845 Subsystem Vendor ID: 0000 00:27:31.845 Serial Number: e042f280a99c043ba7e9 00:27:31.845 Model Number: Linux 00:27:31.845 Firmware Version: 6.7.0-68 00:27:31.845 Recommended Arb Burst: 0 00:27:31.845 IEEE OUI Identifier: 00 00 00 00:27:31.845 Multi-path I/O 00:27:31.845 May have multiple subsystem ports: No 00:27:31.845 May have multiple controllers: No 00:27:31.845 Associated with SR-IOV VF: No 00:27:31.845 Max Data Transfer Size: Unlimited 00:27:31.845 Max Number of Namespaces: 0 00:27:31.845 Max Number of I/O Queues: 1024 00:27:31.845 NVMe Specification Version (VS): 1.3 00:27:31.845 NVMe Specification Version (Identify): 1.3 00:27:31.845 Maximum Queue Entries: 1024 00:27:31.845 Contiguous Queues Required: No 00:27:31.845 Arbitration Mechanisms Supported 00:27:31.845 Weighted Round Robin: Not Supported 00:27:31.845 Vendor Specific: Not Supported 00:27:31.845 Reset Timeout: 7500 ms 00:27:31.845 Doorbell Stride: 4 bytes 00:27:31.845 NVM Subsystem Reset: Not Supported 00:27:31.845 Command Sets Supported 00:27:31.845 NVM Command Set: Supported 00:27:31.845 Boot Partition: Not Supported 00:27:31.845 Memory Page Size Minimum: 4096 bytes 00:27:31.845 Memory Page Size Maximum: 4096 bytes 00:27:31.845 Persistent Memory Region: Not Supported 00:27:31.845 Optional Asynchronous Events Supported 00:27:31.845 Namespace Attribute Notices: Not Supported 00:27:31.845 Firmware Activation Notices: Not Supported 00:27:31.845 ANA Change Notices: Not Supported 00:27:31.845 PLE Aggregate Log Change Notices: Not Supported 00:27:31.845 LBA Status Info Alert Notices: Not Supported 00:27:31.845 EGE Aggregate Log Change Notices: Not Supported 00:27:31.845 Normal NVM Subsystem Shutdown event: Not Supported 00:27:31.845 Zone Descriptor Change Notices: Not Supported 00:27:31.845 Discovery Log Change Notices: Supported 
00:27:31.845 Controller Attributes 00:27:31.845 128-bit Host Identifier: Not Supported 00:27:31.845 Non-Operational Permissive Mode: Not Supported 00:27:31.845 NVM Sets: Not Supported 00:27:31.845 Read Recovery Levels: Not Supported 00:27:31.845 Endurance Groups: Not Supported 00:27:31.845 Predictable Latency Mode: Not Supported 00:27:31.845 Traffic Based Keep ALive: Not Supported 00:27:31.845 Namespace Granularity: Not Supported 00:27:31.845 SQ Associations: Not Supported 00:27:31.845 UUID List: Not Supported 00:27:31.845 Multi-Domain Subsystem: Not Supported 00:27:31.845 Fixed Capacity Management: Not Supported 00:27:31.845 Variable Capacity Management: Not Supported 00:27:31.845 Delete Endurance Group: Not Supported 00:27:31.845 Delete NVM Set: Not Supported 00:27:31.845 Extended LBA Formats Supported: Not Supported 00:27:31.845 Flexible Data Placement Supported: Not Supported 00:27:31.845 00:27:31.845 Controller Memory Buffer Support 00:27:31.845 ================================ 00:27:31.845 Supported: No 00:27:31.845 00:27:31.845 Persistent Memory Region Support 00:27:31.845 ================================ 00:27:31.845 Supported: No 00:27:31.845 00:27:31.845 Admin Command Set Attributes 00:27:31.845 ============================ 00:27:31.845 Security Send/Receive: Not Supported 00:27:31.845 Format NVM: Not Supported 00:27:31.845 Firmware Activate/Download: Not Supported 00:27:31.845 Namespace Management: Not Supported 00:27:31.845 Device Self-Test: Not Supported 00:27:31.845 Directives: Not Supported 00:27:31.845 NVMe-MI: Not Supported 00:27:31.845 Virtualization Management: Not Supported 00:27:31.845 Doorbell Buffer Config: Not Supported 00:27:31.845 Get LBA Status Capability: Not Supported 00:27:31.845 Command & Feature Lockdown Capability: Not Supported 00:27:31.845 Abort Command Limit: 1 00:27:31.845 Async Event Request Limit: 1 00:27:31.845 Number of Firmware Slots: N/A 00:27:31.845 Firmware Slot 1 Read-Only: N/A 00:27:31.845 Firmware Activation Without Reset: N/A 00:27:31.845 Multiple Update Detection Support: N/A 00:27:31.845 Firmware Update Granularity: No Information Provided 00:27:31.845 Per-Namespace SMART Log: No 00:27:31.845 Asymmetric Namespace Access Log Page: Not Supported 00:27:31.845 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:31.845 Command Effects Log Page: Not Supported 00:27:31.845 Get Log Page Extended Data: Supported 00:27:31.845 Telemetry Log Pages: Not Supported 00:27:31.845 Persistent Event Log Pages: Not Supported 00:27:31.845 Supported Log Pages Log Page: May Support 00:27:31.845 Commands Supported & Effects Log Page: Not Supported 00:27:31.845 Feature Identifiers & Effects Log Page:May Support 00:27:31.845 NVMe-MI Commands & Effects Log Page: May Support 00:27:31.845 Data Area 4 for Telemetry Log: Not Supported 00:27:31.845 Error Log Page Entries Supported: 1 00:27:31.845 Keep Alive: Not Supported 00:27:31.845 00:27:31.845 NVM Command Set Attributes 00:27:31.845 ========================== 00:27:31.845 Submission Queue Entry Size 00:27:31.845 Max: 1 00:27:31.845 Min: 1 00:27:31.845 Completion Queue Entry Size 00:27:31.845 Max: 1 00:27:31.845 Min: 1 00:27:31.845 Number of Namespaces: 0 00:27:31.845 Compare Command: Not Supported 00:27:31.845 Write Uncorrectable Command: Not Supported 00:27:31.845 Dataset Management Command: Not Supported 00:27:31.845 Write Zeroes Command: Not Supported 00:27:31.845 Set Features Save Field: Not Supported 00:27:31.845 Reservations: Not Supported 00:27:31.845 Timestamp: Not Supported 00:27:31.845 Copy: Not 
Supported 00:27:31.845 Volatile Write Cache: Not Present 00:27:31.845 Atomic Write Unit (Normal): 1 00:27:31.845 Atomic Write Unit (PFail): 1 00:27:31.845 Atomic Compare & Write Unit: 1 00:27:31.845 Fused Compare & Write: Not Supported 00:27:31.845 Scatter-Gather List 00:27:31.845 SGL Command Set: Supported 00:27:31.845 SGL Keyed: Not Supported 00:27:31.845 SGL Bit Bucket Descriptor: Not Supported 00:27:31.845 SGL Metadata Pointer: Not Supported 00:27:31.845 Oversized SGL: Not Supported 00:27:31.845 SGL Metadata Address: Not Supported 00:27:31.845 SGL Offset: Supported 00:27:31.845 Transport SGL Data Block: Not Supported 00:27:31.845 Replay Protected Memory Block: Not Supported 00:27:31.845 00:27:31.845 Firmware Slot Information 00:27:31.845 ========================= 00:27:31.845 Active slot: 0 00:27:31.845 00:27:31.845 00:27:31.845 Error Log 00:27:31.845 ========= 00:27:31.845 00:27:31.845 Active Namespaces 00:27:31.845 ================= 00:27:31.845 Discovery Log Page 00:27:31.845 ================== 00:27:31.845 Generation Counter: 2 00:27:31.845 Number of Records: 2 00:27:31.845 Record Format: 0 00:27:31.845 00:27:31.845 Discovery Log Entry 0 00:27:31.845 ---------------------- 00:27:31.845 Transport Type: 3 (TCP) 00:27:31.845 Address Family: 1 (IPv4) 00:27:31.845 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:31.845 Entry Flags: 00:27:31.845 Duplicate Returned Information: 0 00:27:31.845 Explicit Persistent Connection Support for Discovery: 0 00:27:31.845 Transport Requirements: 00:27:31.845 Secure Channel: Not Specified 00:27:31.845 Port ID: 1 (0x0001) 00:27:31.845 Controller ID: 65535 (0xffff) 00:27:31.845 Admin Max SQ Size: 32 00:27:31.845 Transport Service Identifier: 4420 00:27:31.845 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:31.845 Transport Address: 10.0.0.1 00:27:31.845 Discovery Log Entry 1 00:27:31.845 ---------------------- 00:27:31.845 Transport Type: 3 (TCP) 00:27:31.845 Address Family: 1 (IPv4) 00:27:31.845 Subsystem Type: 2 (NVM Subsystem) 00:27:31.845 Entry Flags: 00:27:31.845 Duplicate Returned Information: 0 00:27:31.845 Explicit Persistent Connection Support for Discovery: 0 00:27:31.845 Transport Requirements: 00:27:31.845 Secure Channel: Not Specified 00:27:31.845 Port ID: 1 (0x0001) 00:27:31.845 Controller ID: 65535 (0xffff) 00:27:31.845 Admin Max SQ Size: 32 00:27:31.845 Transport Service Identifier: 4420 00:27:31.846 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:31.846 Transport Address: 10.0.0.1 00:27:31.846 12:03:22 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:32.111 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.111 get_feature(0x01) failed 00:27:32.111 get_feature(0x02) failed 00:27:32.111 get_feature(0x04) failed 00:27:32.111 ===================================================== 00:27:32.111 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:32.111 ===================================================== 00:27:32.111 Controller Capabilities/Features 00:27:32.111 ================================ 00:27:32.111 Vendor ID: 0000 00:27:32.111 Subsystem Vendor ID: 0000 00:27:32.111 Serial Number: 8baa196fd02526b2b494 00:27:32.111 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:32.111 Firmware Version: 6.7.0-68 00:27:32.111 Recommended Arb Burst: 6 00:27:32.111 IEEE OUI Identifier: 00 00 00 
00:27:32.111 Multi-path I/O 00:27:32.111 May have multiple subsystem ports: Yes 00:27:32.111 May have multiple controllers: Yes 00:27:32.111 Associated with SR-IOV VF: No 00:27:32.111 Max Data Transfer Size: Unlimited 00:27:32.111 Max Number of Namespaces: 1024 00:27:32.111 Max Number of I/O Queues: 128 00:27:32.111 NVMe Specification Version (VS): 1.3 00:27:32.111 NVMe Specification Version (Identify): 1.3 00:27:32.111 Maximum Queue Entries: 1024 00:27:32.111 Contiguous Queues Required: No 00:27:32.111 Arbitration Mechanisms Supported 00:27:32.111 Weighted Round Robin: Not Supported 00:27:32.111 Vendor Specific: Not Supported 00:27:32.111 Reset Timeout: 7500 ms 00:27:32.111 Doorbell Stride: 4 bytes 00:27:32.111 NVM Subsystem Reset: Not Supported 00:27:32.111 Command Sets Supported 00:27:32.111 NVM Command Set: Supported 00:27:32.111 Boot Partition: Not Supported 00:27:32.111 Memory Page Size Minimum: 4096 bytes 00:27:32.111 Memory Page Size Maximum: 4096 bytes 00:27:32.111 Persistent Memory Region: Not Supported 00:27:32.111 Optional Asynchronous Events Supported 00:27:32.111 Namespace Attribute Notices: Supported 00:27:32.111 Firmware Activation Notices: Not Supported 00:27:32.111 ANA Change Notices: Supported 00:27:32.111 PLE Aggregate Log Change Notices: Not Supported 00:27:32.111 LBA Status Info Alert Notices: Not Supported 00:27:32.111 EGE Aggregate Log Change Notices: Not Supported 00:27:32.111 Normal NVM Subsystem Shutdown event: Not Supported 00:27:32.111 Zone Descriptor Change Notices: Not Supported 00:27:32.111 Discovery Log Change Notices: Not Supported 00:27:32.111 Controller Attributes 00:27:32.111 128-bit Host Identifier: Supported 00:27:32.111 Non-Operational Permissive Mode: Not Supported 00:27:32.111 NVM Sets: Not Supported 00:27:32.111 Read Recovery Levels: Not Supported 00:27:32.111 Endurance Groups: Not Supported 00:27:32.111 Predictable Latency Mode: Not Supported 00:27:32.111 Traffic Based Keep ALive: Supported 00:27:32.111 Namespace Granularity: Not Supported 00:27:32.111 SQ Associations: Not Supported 00:27:32.111 UUID List: Not Supported 00:27:32.111 Multi-Domain Subsystem: Not Supported 00:27:32.111 Fixed Capacity Management: Not Supported 00:27:32.111 Variable Capacity Management: Not Supported 00:27:32.111 Delete Endurance Group: Not Supported 00:27:32.111 Delete NVM Set: Not Supported 00:27:32.111 Extended LBA Formats Supported: Not Supported 00:27:32.111 Flexible Data Placement Supported: Not Supported 00:27:32.111 00:27:32.111 Controller Memory Buffer Support 00:27:32.111 ================================ 00:27:32.111 Supported: No 00:27:32.111 00:27:32.111 Persistent Memory Region Support 00:27:32.111 ================================ 00:27:32.111 Supported: No 00:27:32.111 00:27:32.111 Admin Command Set Attributes 00:27:32.111 ============================ 00:27:32.111 Security Send/Receive: Not Supported 00:27:32.111 Format NVM: Not Supported 00:27:32.111 Firmware Activate/Download: Not Supported 00:27:32.111 Namespace Management: Not Supported 00:27:32.111 Device Self-Test: Not Supported 00:27:32.111 Directives: Not Supported 00:27:32.111 NVMe-MI: Not Supported 00:27:32.111 Virtualization Management: Not Supported 00:27:32.111 Doorbell Buffer Config: Not Supported 00:27:32.111 Get LBA Status Capability: Not Supported 00:27:32.111 Command & Feature Lockdown Capability: Not Supported 00:27:32.111 Abort Command Limit: 4 00:27:32.111 Async Event Request Limit: 4 00:27:32.111 Number of Firmware Slots: N/A 00:27:32.111 Firmware Slot 1 Read-Only: N/A 00:27:32.111 
Firmware Activation Without Reset: N/A 00:27:32.111 Multiple Update Detection Support: N/A 00:27:32.111 Firmware Update Granularity: No Information Provided 00:27:32.111 Per-Namespace SMART Log: Yes 00:27:32.111 Asymmetric Namespace Access Log Page: Supported 00:27:32.111 ANA Transition Time : 10 sec 00:27:32.111 00:27:32.111 Asymmetric Namespace Access Capabilities 00:27:32.111 ANA Optimized State : Supported 00:27:32.111 ANA Non-Optimized State : Supported 00:27:32.111 ANA Inaccessible State : Supported 00:27:32.111 ANA Persistent Loss State : Supported 00:27:32.111 ANA Change State : Supported 00:27:32.111 ANAGRPID is not changed : No 00:27:32.111 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:32.111 00:27:32.111 ANA Group Identifier Maximum : 128 00:27:32.111 Number of ANA Group Identifiers : 128 00:27:32.111 Max Number of Allowed Namespaces : 1024 00:27:32.111 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:32.111 Command Effects Log Page: Supported 00:27:32.111 Get Log Page Extended Data: Supported 00:27:32.111 Telemetry Log Pages: Not Supported 00:27:32.111 Persistent Event Log Pages: Not Supported 00:27:32.111 Supported Log Pages Log Page: May Support 00:27:32.111 Commands Supported & Effects Log Page: Not Supported 00:27:32.111 Feature Identifiers & Effects Log Page:May Support 00:27:32.111 NVMe-MI Commands & Effects Log Page: May Support 00:27:32.111 Data Area 4 for Telemetry Log: Not Supported 00:27:32.111 Error Log Page Entries Supported: 128 00:27:32.111 Keep Alive: Supported 00:27:32.111 Keep Alive Granularity: 1000 ms 00:27:32.111 00:27:32.111 NVM Command Set Attributes 00:27:32.111 ========================== 00:27:32.111 Submission Queue Entry Size 00:27:32.111 Max: 64 00:27:32.111 Min: 64 00:27:32.111 Completion Queue Entry Size 00:27:32.111 Max: 16 00:27:32.111 Min: 16 00:27:32.111 Number of Namespaces: 1024 00:27:32.111 Compare Command: Not Supported 00:27:32.111 Write Uncorrectable Command: Not Supported 00:27:32.111 Dataset Management Command: Supported 00:27:32.111 Write Zeroes Command: Supported 00:27:32.111 Set Features Save Field: Not Supported 00:27:32.111 Reservations: Not Supported 00:27:32.111 Timestamp: Not Supported 00:27:32.111 Copy: Not Supported 00:27:32.111 Volatile Write Cache: Present 00:27:32.111 Atomic Write Unit (Normal): 1 00:27:32.111 Atomic Write Unit (PFail): 1 00:27:32.111 Atomic Compare & Write Unit: 1 00:27:32.111 Fused Compare & Write: Not Supported 00:27:32.111 Scatter-Gather List 00:27:32.111 SGL Command Set: Supported 00:27:32.111 SGL Keyed: Not Supported 00:27:32.111 SGL Bit Bucket Descriptor: Not Supported 00:27:32.111 SGL Metadata Pointer: Not Supported 00:27:32.111 Oversized SGL: Not Supported 00:27:32.111 SGL Metadata Address: Not Supported 00:27:32.111 SGL Offset: Supported 00:27:32.111 Transport SGL Data Block: Not Supported 00:27:32.111 Replay Protected Memory Block: Not Supported 00:27:32.111 00:27:32.111 Firmware Slot Information 00:27:32.111 ========================= 00:27:32.111 Active slot: 0 00:27:32.111 00:27:32.111 Asymmetric Namespace Access 00:27:32.111 =========================== 00:27:32.111 Change Count : 0 00:27:32.111 Number of ANA Group Descriptors : 1 00:27:32.111 ANA Group Descriptor : 0 00:27:32.111 ANA Group ID : 1 00:27:32.111 Number of NSID Values : 1 00:27:32.112 Change Count : 0 00:27:32.112 ANA State : 1 00:27:32.112 Namespace Identifier : 1 00:27:32.112 00:27:32.112 Commands Supported and Effects 00:27:32.112 ============================== 00:27:32.112 Admin Commands 00:27:32.112 -------------- 
00:27:32.112 Get Log Page (02h): Supported 00:27:32.112 Identify (06h): Supported 00:27:32.112 Abort (08h): Supported 00:27:32.112 Set Features (09h): Supported 00:27:32.112 Get Features (0Ah): Supported 00:27:32.112 Asynchronous Event Request (0Ch): Supported 00:27:32.112 Keep Alive (18h): Supported 00:27:32.112 I/O Commands 00:27:32.112 ------------ 00:27:32.112 Flush (00h): Supported 00:27:32.112 Write (01h): Supported LBA-Change 00:27:32.112 Read (02h): Supported 00:27:32.112 Write Zeroes (08h): Supported LBA-Change 00:27:32.112 Dataset Management (09h): Supported 00:27:32.112 00:27:32.112 Error Log 00:27:32.112 ========= 00:27:32.112 Entry: 0 00:27:32.112 Error Count: 0x3 00:27:32.112 Submission Queue Id: 0x0 00:27:32.112 Command Id: 0x5 00:27:32.112 Phase Bit: 0 00:27:32.112 Status Code: 0x2 00:27:32.112 Status Code Type: 0x0 00:27:32.112 Do Not Retry: 1 00:27:32.112 Error Location: 0x28 00:27:32.112 LBA: 0x0 00:27:32.112 Namespace: 0x0 00:27:32.112 Vendor Log Page: 0x0 00:27:32.112 ----------- 00:27:32.112 Entry: 1 00:27:32.112 Error Count: 0x2 00:27:32.112 Submission Queue Id: 0x0 00:27:32.112 Command Id: 0x5 00:27:32.112 Phase Bit: 0 00:27:32.112 Status Code: 0x2 00:27:32.112 Status Code Type: 0x0 00:27:32.112 Do Not Retry: 1 00:27:32.112 Error Location: 0x28 00:27:32.112 LBA: 0x0 00:27:32.112 Namespace: 0x0 00:27:32.112 Vendor Log Page: 0x0 00:27:32.112 ----------- 00:27:32.112 Entry: 2 00:27:32.112 Error Count: 0x1 00:27:32.112 Submission Queue Id: 0x0 00:27:32.112 Command Id: 0x4 00:27:32.112 Phase Bit: 0 00:27:32.112 Status Code: 0x2 00:27:32.112 Status Code Type: 0x0 00:27:32.112 Do Not Retry: 1 00:27:32.112 Error Location: 0x28 00:27:32.112 LBA: 0x0 00:27:32.112 Namespace: 0x0 00:27:32.112 Vendor Log Page: 0x0 00:27:32.112 00:27:32.112 Number of Queues 00:27:32.112 ================ 00:27:32.112 Number of I/O Submission Queues: 128 00:27:32.112 Number of I/O Completion Queues: 128 00:27:32.112 00:27:32.112 ZNS Specific Controller Data 00:27:32.112 ============================ 00:27:32.112 Zone Append Size Limit: 0 00:27:32.112 00:27:32.112 00:27:32.112 Active Namespaces 00:27:32.112 ================= 00:27:32.112 get_feature(0x05) failed 00:27:32.112 Namespace ID:1 00:27:32.112 Command Set Identifier: NVM (00h) 00:27:32.112 Deallocate: Supported 00:27:32.112 Deallocated/Unwritten Error: Not Supported 00:27:32.112 Deallocated Read Value: Unknown 00:27:32.112 Deallocate in Write Zeroes: Not Supported 00:27:32.112 Deallocated Guard Field: 0xFFFF 00:27:32.112 Flush: Supported 00:27:32.112 Reservation: Not Supported 00:27:32.112 Namespace Sharing Capabilities: Multiple Controllers 00:27:32.112 Size (in LBAs): 3125627568 (1490GiB) 00:27:32.112 Capacity (in LBAs): 3125627568 (1490GiB) 00:27:32.112 Utilization (in LBAs): 3125627568 (1490GiB) 00:27:32.112 UUID: b70db0fa-9499-4c09-87d8-10570b1bc69e 00:27:32.112 Thin Provisioning: Not Supported 00:27:32.112 Per-NS Atomic Units: Yes 00:27:32.112 Atomic Boundary Size (Normal): 0 00:27:32.112 Atomic Boundary Size (PFail): 0 00:27:32.112 Atomic Boundary Offset: 0 00:27:32.112 NGUID/EUI64 Never Reused: No 00:27:32.112 ANA group ID: 1 00:27:32.112 Namespace Write Protected: No 00:27:32.112 Number of LBA Formats: 1 00:27:32.112 Current LBA Format: LBA Format #00 00:27:32.112 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:32.112 00:27:32.112 12:03:22 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:32.112 12:03:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:32.112 12:03:22 -- nvmf/common.sh@117 -- # sync 00:27:32.112 12:03:22 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.112 12:03:22 -- nvmf/common.sh@120 -- # set +e 00:27:32.112 12:03:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.112 12:03:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.112 rmmod nvme_tcp 00:27:32.112 rmmod nvme_fabrics 00:27:32.112 12:03:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.112 12:03:22 -- nvmf/common.sh@124 -- # set -e 00:27:32.112 12:03:22 -- nvmf/common.sh@125 -- # return 0 00:27:32.112 12:03:22 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:27:32.112 12:03:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:32.112 12:03:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:32.112 12:03:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:32.112 12:03:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.112 12:03:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.112 12:03:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.112 12:03:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.112 12:03:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.644 12:03:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.644 12:03:24 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:34.644 12:03:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:34.644 12:03:24 -- nvmf/common.sh@675 -- # echo 0 00:27:34.644 12:03:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.644 12:03:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:34.644 12:03:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:34.644 12:03:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.644 12:03:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:34.644 12:03:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:34.645 12:03:24 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:37.929 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:37.929 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:39.305 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:39.305 00:27:39.305 real 0m19.248s 00:27:39.305 user 0m4.646s 00:27:39.305 sys 0m10.239s 00:27:39.305 12:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:39.305 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 ************************************ 00:27:39.305 
END TEST nvmf_identify_kernel_target 00:27:39.305 ************************************ 00:27:39.305 12:03:29 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:39.305 12:03:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:39.305 12:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:39.305 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:27:39.564 ************************************ 00:27:39.564 START TEST nvmf_auth 00:27:39.564 ************************************ 00:27:39.564 12:03:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:39.564 * Looking for test storage... 00:27:39.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.564 12:03:29 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.564 12:03:29 -- nvmf/common.sh@7 -- # uname -s 00:27:39.564 12:03:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.564 12:03:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.564 12:03:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.564 12:03:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.564 12:03:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.564 12:03:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.564 12:03:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.564 12:03:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.564 12:03:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.564 12:03:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.564 12:03:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:39.564 12:03:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:39.564 12:03:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.564 12:03:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.564 12:03:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.564 12:03:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.564 12:03:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.564 12:03:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.564 12:03:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.564 12:03:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.564 12:03:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.564 12:03:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.564 12:03:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.565 12:03:30 -- paths/export.sh@5 -- # export PATH 00:27:39.565 12:03:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.565 12:03:30 -- nvmf/common.sh@47 -- # : 0 00:27:39.565 12:03:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.565 12:03:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.565 12:03:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.565 12:03:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.565 12:03:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.565 12:03:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.565 12:03:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.565 12:03:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.565 12:03:30 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:39.565 12:03:30 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:39.565 12:03:30 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:39.565 12:03:30 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:39.565 12:03:30 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.565 12:03:30 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:39.565 12:03:30 -- host/auth.sh@21 -- # keys=() 00:27:39.565 12:03:30 -- host/auth.sh@77 -- # nvmftestinit 00:27:39.565 12:03:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:39.565 12:03:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.565 12:03:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:39.565 12:03:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:39.565 12:03:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:39.565 12:03:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.565 12:03:30 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.565 12:03:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.565 12:03:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:39.565 12:03:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:39.565 12:03:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.565 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:46.121 12:03:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:46.121 12:03:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.121 12:03:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.121 12:03:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.121 12:03:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.121 12:03:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.121 12:03:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.121 12:03:36 -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.121 12:03:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.121 12:03:36 -- nvmf/common.sh@296 -- # e810=() 00:27:46.121 12:03:36 -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.121 12:03:36 -- nvmf/common.sh@297 -- # x722=() 00:27:46.121 12:03:36 -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.121 12:03:36 -- nvmf/common.sh@298 -- # mlx=() 00:27:46.121 12:03:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.121 12:03:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.121 12:03:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.121 12:03:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.121 12:03:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.121 12:03:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.121 12:03:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:46.121 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:46.121 12:03:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.121 12:03:36 -- nvmf/common.sh@341 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:46.121 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:46.121 12:03:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.121 12:03:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.121 12:03:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.121 12:03:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:46.121 12:03:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.121 12:03:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:46.121 Found net devices under 0000:af:00.0: cvl_0_0 00:27:46.121 12:03:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.121 12:03:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.121 12:03:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.121 12:03:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:46.121 12:03:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.121 12:03:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:46.121 Found net devices under 0000:af:00.1: cvl_0_1 00:27:46.121 12:03:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.121 12:03:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:46.121 12:03:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:46.121 12:03:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:46.121 12:03:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:46.121 12:03:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.121 12:03:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.121 12:03:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.121 12:03:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.121 12:03:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.121 12:03:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.121 12:03:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.121 12:03:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.121 12:03:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.121 12:03:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.121 12:03:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.121 12:03:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.121 12:03:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.379 12:03:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.379 12:03:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.379 12:03:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.379 12:03:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.379 12:03:36 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.379 12:03:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.379 12:03:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:27:46.379 00:27:46.379 --- 10.0.0.2 ping statistics --- 00:27:46.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.379 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:27:46.379 12:03:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:27:46.379 00:27:46.379 --- 10.0.0.1 ping statistics --- 00:27:46.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.379 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:27:46.379 12:03:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.379 12:03:36 -- nvmf/common.sh@411 -- # return 0 00:27:46.379 12:03:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:46.379 12:03:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.379 12:03:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:46.379 12:03:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:46.379 12:03:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.379 12:03:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:46.379 12:03:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:46.379 12:03:36 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:27:46.379 12:03:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:46.379 12:03:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:46.379 12:03:36 -- common/autotest_common.sh@10 -- # set +x 00:27:46.379 12:03:36 -- nvmf/common.sh@470 -- # nvmfpid=2623910 00:27:46.379 12:03:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:46.379 12:03:36 -- nvmf/common.sh@471 -- # waitforlisten 2623910 00:27:46.379 12:03:36 -- common/autotest_common.sh@817 -- # '[' -z 2623910 ']' 00:27:46.379 12:03:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.379 12:03:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:46.379 12:03:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
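[Editor's note] The nvmf_tcp_init/nvmfappstart trace above builds the test network and starts the app: the E810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the peer port cvl_0_1 keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened, connectivity is checked with ping in both directions, and nvmf_tgt is launched inside the namespace with nvme_auth logging. A condensed, hypothetical re-creation of those steps is sketched below; the interface names and workspace path are taken from this log and will differ on other machines.

```bash
# Sketch of the nvmf_tcp_init + nvmfappstart steps traced above (assumes the
# cvl_0_0/cvl_0_1 interface names and SPDK checkout path seen in this log).
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
# Start the SPDK app inside the namespace with nvme_auth tracing enabled,
# as nvmfappstart does in the trace above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &
```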
00:27:46.379 12:03:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:46.379 12:03:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.313 12:03:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:47.313 12:03:37 -- common/autotest_common.sh@850 -- # return 0 00:27:47.313 12:03:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:47.313 12:03:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:47.313 12:03:37 -- common/autotest_common.sh@10 -- # set +x 00:27:47.313 12:03:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.313 12:03:37 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:47.313 12:03:37 -- host/auth.sh@81 -- # gen_key null 32 00:27:47.313 12:03:37 -- host/auth.sh@53 -- # local digest len file key 00:27:47.313 12:03:37 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.313 12:03:37 -- host/auth.sh@54 -- # local -A digests 00:27:47.313 12:03:37 -- host/auth.sh@56 -- # digest=null 00:27:47.313 12:03:37 -- host/auth.sh@56 -- # len=32 00:27:47.313 12:03:37 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:47.313 12:03:37 -- host/auth.sh@57 -- # key=1409eea94130fadc045b981b6140d0bc 00:27:47.313 12:03:37 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:47.313 12:03:37 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Tbn 00:27:47.313 12:03:37 -- host/auth.sh@59 -- # format_dhchap_key 1409eea94130fadc045b981b6140d0bc 0 00:27:47.313 12:03:37 -- nvmf/common.sh@708 -- # format_key DHHC-1 1409eea94130fadc045b981b6140d0bc 0 00:27:47.313 12:03:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:47.313 12:03:37 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:47.313 12:03:37 -- nvmf/common.sh@693 -- # key=1409eea94130fadc045b981b6140d0bc 00:27:47.313 12:03:37 -- nvmf/common.sh@693 -- # digest=0 00:27:47.313 12:03:37 -- nvmf/common.sh@694 -- # python - 00:27:47.313 12:03:37 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Tbn 00:27:47.313 12:03:37 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Tbn 00:27:47.313 12:03:37 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.Tbn 00:27:47.313 12:03:37 -- host/auth.sh@82 -- # gen_key null 48 00:27:47.313 12:03:37 -- host/auth.sh@53 -- # local digest len file key 00:27:47.313 12:03:37 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.313 12:03:37 -- host/auth.sh@54 -- # local -A digests 00:27:47.313 12:03:37 -- host/auth.sh@56 -- # digest=null 00:27:47.313 12:03:37 -- host/auth.sh@56 -- # len=48 00:27:47.313 12:03:37 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:47.313 12:03:37 -- host/auth.sh@57 -- # key=eaa7139e675bce37a4f2ef3370342402b9adb4c4bc9a0246 00:27:47.313 12:03:37 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:47.313 12:03:37 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.uP8 00:27:47.313 12:03:37 -- host/auth.sh@59 -- # format_dhchap_key eaa7139e675bce37a4f2ef3370342402b9adb4c4bc9a0246 0 00:27:47.313 12:03:37 -- nvmf/common.sh@708 -- # format_key DHHC-1 eaa7139e675bce37a4f2ef3370342402b9adb4c4bc9a0246 0 00:27:47.313 12:03:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:47.313 12:03:37 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:47.313 12:03:37 -- nvmf/common.sh@693 -- # key=eaa7139e675bce37a4f2ef3370342402b9adb4c4bc9a0246 00:27:47.313 12:03:37 -- nvmf/common.sh@693 -- # 
digest=0 00:27:47.313 12:03:37 -- nvmf/common.sh@694 -- # python - 00:27:47.571 12:03:37 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.uP8 00:27:47.571 12:03:37 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.uP8 00:27:47.571 12:03:37 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.uP8 00:27:47.571 12:03:37 -- host/auth.sh@83 -- # gen_key sha256 32 00:27:47.571 12:03:37 -- host/auth.sh@53 -- # local digest len file key 00:27:47.571 12:03:37 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.571 12:03:37 -- host/auth.sh@54 -- # local -A digests 00:27:47.571 12:03:37 -- host/auth.sh@56 -- # digest=sha256 00:27:47.571 12:03:37 -- host/auth.sh@56 -- # len=32 00:27:47.571 12:03:37 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:47.571 12:03:37 -- host/auth.sh@57 -- # key=7909dbd79379c9bd522e8be5daec1e2b 00:27:47.571 12:03:37 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:27:47.571 12:03:37 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.uVV 00:27:47.571 12:03:37 -- host/auth.sh@59 -- # format_dhchap_key 7909dbd79379c9bd522e8be5daec1e2b 1 00:27:47.571 12:03:37 -- nvmf/common.sh@708 -- # format_key DHHC-1 7909dbd79379c9bd522e8be5daec1e2b 1 00:27:47.571 12:03:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:47.571 12:03:37 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:47.571 12:03:37 -- nvmf/common.sh@693 -- # key=7909dbd79379c9bd522e8be5daec1e2b 00:27:47.571 12:03:37 -- nvmf/common.sh@693 -- # digest=1 00:27:47.571 12:03:37 -- nvmf/common.sh@694 -- # python - 00:27:47.571 12:03:37 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.uVV 00:27:47.571 12:03:37 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.uVV 00:27:47.571 12:03:37 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.uVV 00:27:47.571 12:03:37 -- host/auth.sh@84 -- # gen_key sha384 48 00:27:47.571 12:03:37 -- host/auth.sh@53 -- # local digest len file key 00:27:47.571 12:03:37 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.571 12:03:37 -- host/auth.sh@54 -- # local -A digests 00:27:47.571 12:03:37 -- host/auth.sh@56 -- # digest=sha384 00:27:47.571 12:03:37 -- host/auth.sh@56 -- # len=48 00:27:47.571 12:03:37 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:47.571 12:03:37 -- host/auth.sh@57 -- # key=892864a078e075e02c9206d97b7bc676611d8470c3ee41cf 00:27:47.571 12:03:37 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:27:47.571 12:03:37 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.FCi 00:27:47.571 12:03:37 -- host/auth.sh@59 -- # format_dhchap_key 892864a078e075e02c9206d97b7bc676611d8470c3ee41cf 2 00:27:47.571 12:03:37 -- nvmf/common.sh@708 -- # format_key DHHC-1 892864a078e075e02c9206d97b7bc676611d8470c3ee41cf 2 00:27:47.571 12:03:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:47.571 12:03:37 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:47.571 12:03:37 -- nvmf/common.sh@693 -- # key=892864a078e075e02c9206d97b7bc676611d8470c3ee41cf 00:27:47.571 12:03:37 -- nvmf/common.sh@693 -- # digest=2 00:27:47.571 12:03:37 -- nvmf/common.sh@694 -- # python - 00:27:47.571 12:03:37 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.FCi 00:27:47.571 12:03:37 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.FCi 00:27:47.571 12:03:37 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.FCi 00:27:47.571 12:03:37 -- host/auth.sh@85 -- # gen_key sha512 64 00:27:47.571 12:03:37 -- host/auth.sh@53 -- # local digest len file key 00:27:47.571 12:03:38 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:47.571 12:03:38 -- host/auth.sh@54 -- # local -A digests 00:27:47.571 12:03:38 -- host/auth.sh@56 -- # digest=sha512 00:27:47.571 12:03:38 -- host/auth.sh@56 -- # len=64 00:27:47.571 12:03:38 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:47.571 12:03:38 -- host/auth.sh@57 -- # key=012c70f30da01717cc818024a83ebe9567b58a3df0762194afc018a1e0205952 00:27:47.571 12:03:38 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:27:47.571 12:03:38 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.Ogi 00:27:47.571 12:03:38 -- host/auth.sh@59 -- # format_dhchap_key 012c70f30da01717cc818024a83ebe9567b58a3df0762194afc018a1e0205952 3 00:27:47.571 12:03:38 -- nvmf/common.sh@708 -- # format_key DHHC-1 012c70f30da01717cc818024a83ebe9567b58a3df0762194afc018a1e0205952 3 00:27:47.571 12:03:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:47.571 12:03:38 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:47.571 12:03:38 -- nvmf/common.sh@693 -- # key=012c70f30da01717cc818024a83ebe9567b58a3df0762194afc018a1e0205952 00:27:47.571 12:03:38 -- nvmf/common.sh@693 -- # digest=3 00:27:47.571 12:03:38 -- nvmf/common.sh@694 -- # python - 00:27:47.571 12:03:38 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.Ogi 00:27:47.571 12:03:38 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.Ogi 00:27:47.571 12:03:38 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.Ogi 00:27:47.571 12:03:38 -- host/auth.sh@87 -- # waitforlisten 2623910 00:27:47.571 12:03:38 -- common/autotest_common.sh@817 -- # '[' -z 2623910 ']' 00:27:47.571 12:03:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.571 12:03:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:47.571 12:03:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
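[Editor's note] The gen_key calls traced above produce the five DH-HMAC-CHAP secrets used later as keys[0]..keys[4]: a hex string of the requested length is read from /dev/urandom with xxd, wrapped into the DHHC-1:<digest>:<...>: form by format_dhchap_key, and stored in a 0600 temp file. A hypothetical stand-alone equivalent of the sha256 case follows; the body of the python helper is not shown in the xtrace output, so the base64-plus-CRC32 wrapping here is an assumption based on the DH-HMAC-CHAP secret representation, not a copy of the script.

```bash
# Hypothetical re-creation of gen_key/format_dhchap_key as traced above.
digest=1                                    # 0=null, 1=sha256, 2=sha384, 3=sha512
len=32                                      # key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-sha256.XXX)
# Assumed DHHC-1 wrapping: base64 of the ASCII secret plus a 4-byte CRC32 trailer.
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"
```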
00:27:47.571 12:03:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:47.571 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:27:47.829 12:03:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:47.829 12:03:38 -- common/autotest_common.sh@850 -- # return 0 00:27:47.829 12:03:38 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:47.829 12:03:38 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tbn 00:27:47.829 12:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.829 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:27:47.829 12:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.829 12:03:38 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:47.829 12:03:38 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.uP8 00:27:47.829 12:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.829 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:27:47.829 12:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.829 12:03:38 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:47.829 12:03:38 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uVV 00:27:47.829 12:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.829 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:27:47.829 12:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.829 12:03:38 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:47.829 12:03:38 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FCi 00:27:47.829 12:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.829 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:27:47.829 12:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.829 12:03:38 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:47.829 12:03:38 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ogi 00:27:47.829 12:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.829 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:27:47.829 12:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.829 12:03:38 -- host/auth.sh@92 -- # nvmet_auth_init 00:27:47.829 12:03:38 -- host/auth.sh@35 -- # get_main_ns_ip 00:27:47.829 12:03:38 -- nvmf/common.sh@717 -- # local ip 00:27:47.829 12:03:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:47.829 12:03:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:47.829 12:03:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.829 12:03:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.829 12:03:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:47.829 12:03:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.829 12:03:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:47.829 12:03:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:47.829 12:03:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:47.829 12:03:38 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:47.829 12:03:38 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:47.829 12:03:38 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:47.829 12:03:38 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.829 12:03:38 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.829 12:03:38 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:47.829 12:03:38 -- nvmf/common.sh@628 -- # local block nvme 00:27:47.829 12:03:38 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:47.829 12:03:38 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:47.829 12:03:38 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:47.829 12:03:38 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:51.110 Waiting for block devices as requested 00:27:51.110 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:51.110 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:51.110 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:51.367 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:51.367 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:51.367 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:51.625 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:51.625 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:51.625 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:51.625 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:51.883 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:51.883 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:51.883 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:52.141 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:52.141 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:52.141 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:52.398 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:53.329 12:03:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:53.329 12:03:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:53.329 12:03:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:53.329 12:03:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:53.329 12:03:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:53.330 12:03:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:53.330 12:03:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:53.330 12:03:43 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:53.330 12:03:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:53.330 No valid GPT data, bailing 00:27:53.330 12:03:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:53.330 12:03:43 -- scripts/common.sh@391 -- # pt= 00:27:53.330 12:03:43 -- scripts/common.sh@392 -- # return 1 00:27:53.330 12:03:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:53.330 12:03:43 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:27:53.330 12:03:43 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.330 12:03:43 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:53.330 12:03:43 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:53.330 12:03:43 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:53.330 12:03:43 -- nvmf/common.sh@656 -- # echo 1 00:27:53.330 12:03:43 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:27:53.330 12:03:43 -- nvmf/common.sh@658 -- # echo 1 00:27:53.330 12:03:43 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:53.330 12:03:43 -- nvmf/common.sh@661 -- # echo tcp 00:27:53.330 12:03:43 -- 
nvmf/common.sh@662 -- # echo 4420 00:27:53.330 12:03:43 -- nvmf/common.sh@663 -- # echo ipv4 00:27:53.330 12:03:43 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:53.330 12:03:43 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:53.330 00:27:53.330 Discovery Log Number of Records 2, Generation counter 2 00:27:53.330 =====Discovery Log Entry 0====== 00:27:53.330 trtype: tcp 00:27:53.330 adrfam: ipv4 00:27:53.330 subtype: current discovery subsystem 00:27:53.330 treq: not specified, sq flow control disable supported 00:27:53.330 portid: 1 00:27:53.330 trsvcid: 4420 00:27:53.330 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:53.330 traddr: 10.0.0.1 00:27:53.330 eflags: none 00:27:53.330 sectype: none 00:27:53.330 =====Discovery Log Entry 1====== 00:27:53.330 trtype: tcp 00:27:53.330 adrfam: ipv4 00:27:53.330 subtype: nvme subsystem 00:27:53.330 treq: not specified, sq flow control disable supported 00:27:53.330 portid: 1 00:27:53.330 trsvcid: 4420 00:27:53.330 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:53.330 traddr: 10.0.0.1 00:27:53.330 eflags: none 00:27:53.330 sectype: none 00:27:53.330 12:03:43 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.330 12:03:43 -- host/auth.sh@37 -- # echo 0 00:27:53.330 12:03:43 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:53.330 12:03:43 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.330 12:03:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.330 12:03:43 -- host/auth.sh@44 -- # digest=sha256 00:27:53.330 12:03:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.330 12:03:43 -- host/auth.sh@44 -- # keyid=1 00:27:53.330 12:03:43 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:53.330 12:03:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.330 12:03:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:53.330 12:03:43 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:53.330 12:03:43 -- host/auth.sh@100 -- # IFS=, 00:27:53.330 12:03:43 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:27:53.330 12:03:43 -- host/auth.sh@100 -- # IFS=, 00:27:53.330 12:03:43 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.330 12:03:43 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:53.330 12:03:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.330 12:03:43 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:27:53.330 12:03:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.330 12:03:43 -- host/auth.sh@68 -- # keyid=1 00:27:53.330 12:03:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.330 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.330 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:27:53.330 12:03:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.330 12:03:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.330 12:03:43 -- nvmf/common.sh@717 -- # local ip 00:27:53.330 12:03:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.330 12:03:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.330 12:03:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.330 12:03:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.330 12:03:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.330 12:03:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.330 12:03:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.330 12:03:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.330 12:03:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.330 12:03:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:53.330 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.330 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:27:53.330 nvme0n1 00:27:53.330 12:03:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.330 12:03:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.330 12:03:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.330 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.330 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:27:53.587 12:03:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.587 12:03:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.587 12:03:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.587 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.587 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:27:53.587 12:03:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.587 12:03:43 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:53.587 12:03:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.587 12:03:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.587 12:03:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:53.587 12:03:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.587 12:03:43 -- host/auth.sh@44 -- # digest=sha256 00:27:53.587 12:03:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.587 12:03:43 -- host/auth.sh@44 -- # keyid=0 00:27:53.587 12:03:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:53.587 12:03:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.587 12:03:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:53.587 12:03:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:53.587 12:03:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:27:53.587 12:03:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.587 12:03:43 -- host/auth.sh@68 -- # digest=sha256 00:27:53.587 12:03:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:53.587 12:03:43 -- host/auth.sh@68 -- # keyid=0 00:27:53.587 12:03:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.587 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.587 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:27:53.587 12:03:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.587 12:03:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.587 12:03:43 -- nvmf/common.sh@717 -- # local ip 00:27:53.587 12:03:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.587 12:03:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.587 12:03:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.587 12:03:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.587 12:03:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.587 12:03:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.587 12:03:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.587 12:03:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.587 12:03:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.587 12:03:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:53.587 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.587 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:27:53.587 nvme0n1 00:27:53.587 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.587 12:03:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.587 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.587 12:03:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.587 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.587 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.587 12:03:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.587 12:03:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.587 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.587 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.587 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.845 12:03:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.845 12:03:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.845 12:03:44 -- host/auth.sh@44 -- # digest=sha256 00:27:53.845 12:03:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.845 12:03:44 -- host/auth.sh@44 -- # keyid=1 00:27:53.845 12:03:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:53.845 12:03:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.845 12:03:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:53.845 12:03:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:53.845 12:03:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:27:53.845 12:03:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.845 12:03:44 -- host/auth.sh@68 -- # digest=sha256 00:27:53.845 12:03:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:53.845 12:03:44 -- host/auth.sh@68 -- # keyid=1 00:27:53.845 12:03:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.845 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.845 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.845 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@70 -- # get_main_ns_ip 
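[Editor's note] Each connect_authenticate pass traced here boils down to the same few RPC calls against the app started earlier: restrict the allowed digests and DH groups, attach a controller to the kernel target at 10.0.0.1:4420 with one of the registered keys, confirm it appears in bdev_nvme_get_controllers, then detach it. rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py; invoked directly, the sha256/ffdhe2048/key1 iteration would look roughly like the sketch below (paths and key names as in this log).

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Register the secret file once (done for key0..key4 at startup in the trace).
$RPC keyring_file_add_key key1 /tmp/spdk.key-null.uP8
# Limit the host to the digest/dhgroup pair exercised in this iteration.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Attach to the kernel nvmet subsystem using DH-HMAC-CHAP with key1.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
$RPC bdev_nvme_get_controllers                     # expect "nvme0" in the output
$RPC bdev_nvme_detach_controller nvme0
```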
00:27:53.845 12:03:44 -- nvmf/common.sh@717 -- # local ip 00:27:53.845 12:03:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.845 12:03:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.845 12:03:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.845 12:03:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.845 12:03:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.845 12:03:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.845 12:03:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.845 12:03:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.845 12:03:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.845 12:03:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:53.845 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.845 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.845 nvme0n1 00:27:53.845 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.845 12:03:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.845 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.845 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.845 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.845 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.845 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.845 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.845 12:03:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:53.845 12:03:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.845 12:03:44 -- host/auth.sh@44 -- # digest=sha256 00:27:53.845 12:03:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.845 12:03:44 -- host/auth.sh@44 -- # keyid=2 00:27:53.845 12:03:44 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:53.845 12:03:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.845 12:03:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:53.845 12:03:44 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:53.845 12:03:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:27:53.845 12:03:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.845 12:03:44 -- host/auth.sh@68 -- # digest=sha256 00:27:53.845 12:03:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:53.845 12:03:44 -- host/auth.sh@68 -- # keyid=2 00:27:53.845 12:03:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.845 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.845 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.845 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.845 12:03:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.845 12:03:44 -- nvmf/common.sh@717 -- # local ip 00:27:53.845 12:03:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.845 12:03:44 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:27:53.845 12:03:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.845 12:03:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.845 12:03:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.845 12:03:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.845 12:03:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.845 12:03:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.845 12:03:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.845 12:03:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.845 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.845 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.189 nvme0n1 00:27:54.189 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.189 12:03:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.189 12:03:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.189 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.189 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.189 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.189 12:03:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.189 12:03:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.189 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.189 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.189 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.189 12:03:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.189 12:03:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:54.189 12:03:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.189 12:03:44 -- host/auth.sh@44 -- # digest=sha256 00:27:54.189 12:03:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.189 12:03:44 -- host/auth.sh@44 -- # keyid=3 00:27:54.189 12:03:44 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:54.189 12:03:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.189 12:03:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.189 12:03:44 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:54.189 12:03:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:27:54.189 12:03:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.189 12:03:44 -- host/auth.sh@68 -- # digest=sha256 00:27:54.189 12:03:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.189 12:03:44 -- host/auth.sh@68 -- # keyid=3 00:27:54.189 12:03:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.189 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.189 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.189 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.189 12:03:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.189 12:03:44 -- nvmf/common.sh@717 -- # local ip 00:27:54.189 12:03:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.189 12:03:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.189 12:03:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
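[Editor's note] On the target side, the nvmet_auth_set_key call made before each connect_authenticate pass points the kernel nvmet host entry at the matching secret. The echo commands appear in the trace, but their redirection targets do not, so the configfs attribute paths in the sketch below are assumptions based on the kernel nvmet auth layout rather than something read from this log; the NQN and key value are taken verbatim from the trace.

```bash
# Hypothetical expansion of "nvmet_auth_set_key sha256 ffdhe2048 2" as traced above;
# the dhchap_* attribute names are assumed, they are not visible in the xtrace output.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"          # digest used for DH-HMAC-CHAP
echo ffdhe2048      > "$host/dhchap_dhgroup"       # allowed Diffie-Hellman group
echo "DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH:" \
    > "$host/dhchap_key"                           # keys[2] generated earlier
```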
00:27:54.189 12:03:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.189 12:03:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.189 12:03:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.189 12:03:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.189 12:03:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.189 12:03:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.189 12:03:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:54.189 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.189 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.446 nvme0n1 00:27:54.446 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.446 12:03:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.446 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.446 12:03:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.446 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.446 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.446 12:03:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.446 12:03:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.446 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.446 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.446 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.446 12:03:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.446 12:03:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:54.446 12:03:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.446 12:03:44 -- host/auth.sh@44 -- # digest=sha256 00:27:54.446 12:03:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.446 12:03:44 -- host/auth.sh@44 -- # keyid=4 00:27:54.446 12:03:44 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:54.446 12:03:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.446 12:03:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.446 12:03:44 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:54.446 12:03:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:27:54.446 12:03:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.446 12:03:44 -- host/auth.sh@68 -- # digest=sha256 00:27:54.446 12:03:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.446 12:03:44 -- host/auth.sh@68 -- # keyid=4 00:27:54.446 12:03:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.446 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.446 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.446 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.446 12:03:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.446 12:03:44 -- nvmf/common.sh@717 -- # local ip 00:27:54.446 12:03:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.446 12:03:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.446 12:03:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.446 12:03:44 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.446 12:03:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.446 12:03:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.446 12:03:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.447 12:03:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.447 12:03:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.447 12:03:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.447 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.447 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.447 nvme0n1 00:27:54.447 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.447 12:03:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.447 12:03:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.447 12:03:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.447 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:27:54.447 12:03:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.703 12:03:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.703 12:03:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.703 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.703 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.703 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.703 12:03:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.703 12:03:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.703 12:03:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:54.703 12:03:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.703 12:03:45 -- host/auth.sh@44 -- # digest=sha256 00:27:54.703 12:03:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.703 12:03:45 -- host/auth.sh@44 -- # keyid=0 00:27:54.703 12:03:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:54.703 12:03:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.703 12:03:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:54.703 12:03:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:54.703 12:03:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:27:54.703 12:03:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.703 12:03:45 -- host/auth.sh@68 -- # digest=sha256 00:27:54.703 12:03:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:54.703 12:03:45 -- host/auth.sh@68 -- # keyid=0 00:27:54.703 12:03:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:54.703 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.703 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.703 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.703 12:03:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.703 12:03:45 -- nvmf/common.sh@717 -- # local ip 00:27:54.703 12:03:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.703 12:03:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.703 12:03:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.703 12:03:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.703 12:03:45 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:27:54.703 12:03:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.703 12:03:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.703 12:03:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.703 12:03:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.703 12:03:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:54.703 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.703 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.703 nvme0n1 00:27:54.703 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.703 12:03:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.703 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.703 12:03:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.703 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.703 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.960 12:03:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.960 12:03:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.960 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.960 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.960 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.960 12:03:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.960 12:03:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:54.960 12:03:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.960 12:03:45 -- host/auth.sh@44 -- # digest=sha256 00:27:54.960 12:03:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.960 12:03:45 -- host/auth.sh@44 -- # keyid=1 00:27:54.960 12:03:45 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:54.960 12:03:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.960 12:03:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:54.960 12:03:45 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:54.960 12:03:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:27:54.960 12:03:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.960 12:03:45 -- host/auth.sh@68 -- # digest=sha256 00:27:54.960 12:03:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:54.960 12:03:45 -- host/auth.sh@68 -- # keyid=1 00:27:54.960 12:03:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:54.960 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.960 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.960 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.961 12:03:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.961 12:03:45 -- nvmf/common.sh@717 -- # local ip 00:27:54.961 12:03:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.961 12:03:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.961 12:03:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.961 12:03:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.961 12:03:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.961 12:03:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.961 12:03:45 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.961 12:03:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.961 12:03:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.961 12:03:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:54.961 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.961 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.961 nvme0n1 00:27:54.961 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.961 12:03:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.961 12:03:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.961 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.961 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.961 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.217 12:03:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.217 12:03:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.217 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.217 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.217 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.217 12:03:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.217 12:03:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:55.217 12:03:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.217 12:03:45 -- host/auth.sh@44 -- # digest=sha256 00:27:55.217 12:03:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.217 12:03:45 -- host/auth.sh@44 -- # keyid=2 00:27:55.217 12:03:45 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:55.217 12:03:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.217 12:03:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.217 12:03:45 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:55.217 12:03:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:27:55.217 12:03:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.217 12:03:45 -- host/auth.sh@68 -- # digest=sha256 00:27:55.217 12:03:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.217 12:03:45 -- host/auth.sh@68 -- # keyid=2 00:27:55.217 12:03:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.217 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.217 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.217 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.217 12:03:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.217 12:03:45 -- nvmf/common.sh@717 -- # local ip 00:27:55.217 12:03:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.217 12:03:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.217 12:03:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.217 12:03:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.217 12:03:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.217 12:03:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.218 12:03:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.218 12:03:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.218 12:03:45 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:27:55.218 12:03:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:55.218 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.218 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.218 nvme0n1 00:27:55.218 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.218 12:03:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.218 12:03:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.218 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.218 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.218 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.475 12:03:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.475 12:03:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.475 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.475 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.475 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.475 12:03:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.475 12:03:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:55.475 12:03:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.475 12:03:45 -- host/auth.sh@44 -- # digest=sha256 00:27:55.475 12:03:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.475 12:03:45 -- host/auth.sh@44 -- # keyid=3 00:27:55.475 12:03:45 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:55.475 12:03:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.475 12:03:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.475 12:03:45 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:55.475 12:03:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:27:55.475 12:03:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.475 12:03:45 -- host/auth.sh@68 -- # digest=sha256 00:27:55.475 12:03:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.475 12:03:45 -- host/auth.sh@68 -- # keyid=3 00:27:55.475 12:03:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.475 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.475 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.475 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.475 12:03:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.475 12:03:45 -- nvmf/common.sh@717 -- # local ip 00:27:55.475 12:03:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.475 12:03:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.475 12:03:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.475 12:03:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.475 12:03:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.475 12:03:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.475 12:03:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.475 12:03:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.475 12:03:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.475 12:03:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:55.475 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.475 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.475 nvme0n1 00:27:55.475 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.475 12:03:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.475 12:03:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.475 12:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.475 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:55.475 12:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.475 12:03:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.475 12:03:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.475 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.475 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.732 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.732 12:03:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.732 12:03:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:55.732 12:03:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.732 12:03:46 -- host/auth.sh@44 -- # digest=sha256 00:27:55.732 12:03:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.732 12:03:46 -- host/auth.sh@44 -- # keyid=4 00:27:55.732 12:03:46 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:55.732 12:03:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.732 12:03:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.732 12:03:46 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:55.732 12:03:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:27:55.732 12:03:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.732 12:03:46 -- host/auth.sh@68 -- # digest=sha256 00:27:55.732 12:03:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.732 12:03:46 -- host/auth.sh@68 -- # keyid=4 00:27:55.732 12:03:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.732 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.732 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.732 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.733 12:03:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.733 12:03:46 -- nvmf/common.sh@717 -- # local ip 00:27:55.733 12:03:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.733 12:03:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.733 12:03:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.733 12:03:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.733 12:03:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.733 12:03:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.733 12:03:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.733 12:03:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.733 12:03:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.733 12:03:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:27:55.733 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.733 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.733 nvme0n1 00:27:55.733 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.733 12:03:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.733 12:03:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.733 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.733 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.733 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.733 12:03:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.733 12:03:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.733 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.733 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.990 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.990 12:03:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.990 12:03:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.990 12:03:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:55.990 12:03:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.990 12:03:46 -- host/auth.sh@44 -- # digest=sha256 00:27:55.990 12:03:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.990 12:03:46 -- host/auth.sh@44 -- # keyid=0 00:27:55.990 12:03:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:55.990 12:03:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.990 12:03:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:55.990 12:03:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:55.990 12:03:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:27:55.990 12:03:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.990 12:03:46 -- host/auth.sh@68 -- # digest=sha256 00:27:55.990 12:03:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:55.990 12:03:46 -- host/auth.sh@68 -- # keyid=0 00:27:55.990 12:03:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:55.990 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.990 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.990 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.990 12:03:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.990 12:03:46 -- nvmf/common.sh@717 -- # local ip 00:27:55.990 12:03:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.990 12:03:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.990 12:03:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.990 12:03:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.990 12:03:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.990 12:03:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.990 12:03:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.990 12:03:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.990 12:03:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.990 12:03:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:55.990 12:03:46 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:55.990 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.247 nvme0n1 00:27:56.247 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.247 12:03:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.247 12:03:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.247 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.247 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.247 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.247 12:03:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.247 12:03:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.247 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.247 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.247 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.247 12:03:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.247 12:03:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:56.247 12:03:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.247 12:03:46 -- host/auth.sh@44 -- # digest=sha256 00:27:56.247 12:03:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.247 12:03:46 -- host/auth.sh@44 -- # keyid=1 00:27:56.247 12:03:46 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:56.247 12:03:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.247 12:03:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:56.247 12:03:46 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:56.247 12:03:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:27:56.247 12:03:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.247 12:03:46 -- host/auth.sh@68 -- # digest=sha256 00:27:56.247 12:03:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:56.247 12:03:46 -- host/auth.sh@68 -- # keyid=1 00:27:56.247 12:03:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.247 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.247 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.247 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.247 12:03:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.247 12:03:46 -- nvmf/common.sh@717 -- # local ip 00:27:56.247 12:03:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.247 12:03:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.247 12:03:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.247 12:03:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.247 12:03:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.247 12:03:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.247 12:03:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.247 12:03:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.247 12:03:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.247 12:03:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:56.247 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.247 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.504 nvme0n1 00:27:56.504 
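
Each repetition of host/auth.sh lines 66-74 in this log is one connect_authenticate() cycle on the initiator side: restrict the allowed DH-CHAP digest and DH group, attach to the target with one of the pre-loaded keys, confirm the controller actually appears, then detach. A condensed sketch assembled from the RPCs visible in the trace; the real function parameterizes transport, address and NQNs, which are inlined here for readability:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}"
        # The attach only produces a controller if DH-HMAC-CHAP authentication passed,
        # so the controller list must now contain exactly nvme0.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

rpc_cmd is the harness wrapper around the SPDK RPC client, as used throughout this trace; the bdev name nvme0n1 echoed after each attach is the namespace created on the new controller.
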
12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.504 12:03:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.504 12:03:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.504 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.504 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.504 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.504 12:03:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.504 12:03:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.504 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.504 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.504 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.504 12:03:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.504 12:03:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:56.504 12:03:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.504 12:03:46 -- host/auth.sh@44 -- # digest=sha256 00:27:56.504 12:03:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.504 12:03:46 -- host/auth.sh@44 -- # keyid=2 00:27:56.504 12:03:46 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:56.504 12:03:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.504 12:03:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:56.504 12:03:46 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:56.504 12:03:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:27:56.504 12:03:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.504 12:03:46 -- host/auth.sh@68 -- # digest=sha256 00:27:56.504 12:03:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:56.504 12:03:46 -- host/auth.sh@68 -- # keyid=2 00:27:56.504 12:03:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.504 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.504 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.504 12:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.504 12:03:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.505 12:03:46 -- nvmf/common.sh@717 -- # local ip 00:27:56.505 12:03:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.505 12:03:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.505 12:03:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.505 12:03:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.505 12:03:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.505 12:03:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.505 12:03:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.505 12:03:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.505 12:03:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.505 12:03:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:56.505 12:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.505 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.762 nvme0n1 00:27:56.762 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.762 12:03:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.762 12:03:47 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.762 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.762 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:56.762 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.762 12:03:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.762 12:03:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.762 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.762 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:56.762 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.762 12:03:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.762 12:03:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:56.762 12:03:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.762 12:03:47 -- host/auth.sh@44 -- # digest=sha256 00:27:56.762 12:03:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.762 12:03:47 -- host/auth.sh@44 -- # keyid=3 00:27:56.762 12:03:47 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:56.762 12:03:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.762 12:03:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:56.762 12:03:47 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:56.762 12:03:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:27:56.762 12:03:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.762 12:03:47 -- host/auth.sh@68 -- # digest=sha256 00:27:56.762 12:03:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:56.762 12:03:47 -- host/auth.sh@68 -- # keyid=3 00:27:56.762 12:03:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.762 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.762 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:56.762 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.762 12:03:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.762 12:03:47 -- nvmf/common.sh@717 -- # local ip 00:27:56.762 12:03:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.762 12:03:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.762 12:03:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.762 12:03:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.762 12:03:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.762 12:03:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.762 12:03:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.762 12:03:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.762 12:03:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.762 12:03:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:56.762 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.762 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.019 nvme0n1 00:27:57.019 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.019 12:03:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.019 12:03:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.019 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
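
nvmet_auth_set_key() is the target-side half of each iteration: it selects the DHHC-1 secret for the requested key id and pushes the hash name, DH group and secret into the kernel nvmet host entry. Only the three echo lines are visible in the trace, so the configfs paths and attribute names in this sketch are assumptions for illustration, not taken from the log:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}      # one of the DHHC-1:..: secrets seen in the trace
        # Assumed configfs destinations for the allowed-host entry (not shown in the log):
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha256)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "${key}"          > "${host_dir}/dhchap_key"
    }

The keys array and the hostnqn are grounded in the trace (the loop iterates "${!keys[@]}" and the initiator attaches as nqn.2024-02.io.spdk:host0); everything else above is a best-effort reconstruction.
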
00:27:57.019 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.019 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.019 12:03:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.019 12:03:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.019 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.019 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.019 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.019 12:03:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.019 12:03:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:57.019 12:03:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.019 12:03:47 -- host/auth.sh@44 -- # digest=sha256 00:27:57.019 12:03:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.019 12:03:47 -- host/auth.sh@44 -- # keyid=4 00:27:57.019 12:03:47 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:57.019 12:03:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.019 12:03:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:57.019 12:03:47 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:57.019 12:03:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:27:57.019 12:03:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.019 12:03:47 -- host/auth.sh@68 -- # digest=sha256 00:27:57.019 12:03:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:57.019 12:03:47 -- host/auth.sh@68 -- # keyid=4 00:27:57.019 12:03:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.019 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.019 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.019 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.019 12:03:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.019 12:03:47 -- nvmf/common.sh@717 -- # local ip 00:27:57.019 12:03:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.019 12:03:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.019 12:03:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.019 12:03:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.019 12:03:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.019 12:03:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.019 12:03:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.275 12:03:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.275 12:03:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.275 12:03:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.275 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.275 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.275 nvme0n1 00:27:57.275 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.275 12:03:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.275 12:03:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.275 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.275 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.275 
12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.532 12:03:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.532 12:03:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.532 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.532 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.532 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.532 12:03:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.532 12:03:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.532 12:03:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:57.532 12:03:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.532 12:03:47 -- host/auth.sh@44 -- # digest=sha256 00:27:57.532 12:03:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.532 12:03:47 -- host/auth.sh@44 -- # keyid=0 00:27:57.532 12:03:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:57.532 12:03:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.532 12:03:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:57.532 12:03:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:57.532 12:03:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:27:57.532 12:03:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.532 12:03:47 -- host/auth.sh@68 -- # digest=sha256 00:27:57.532 12:03:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:57.532 12:03:47 -- host/auth.sh@68 -- # keyid=0 00:27:57.532 12:03:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.532 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.532 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.532 12:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.532 12:03:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.532 12:03:47 -- nvmf/common.sh@717 -- # local ip 00:27:57.532 12:03:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.532 12:03:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.532 12:03:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.532 12:03:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.532 12:03:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.532 12:03:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.532 12:03:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.532 12:03:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.532 12:03:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.532 12:03:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:57.532 12:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.532 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.789 nvme0n1 00:27:57.789 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.789 12:03:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.789 12:03:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.789 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.789 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.789 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.789 12:03:48 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.789 12:03:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.789 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.789 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.789 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.789 12:03:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.789 12:03:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:57.789 12:03:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.789 12:03:48 -- host/auth.sh@44 -- # digest=sha256 00:27:57.789 12:03:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.789 12:03:48 -- host/auth.sh@44 -- # keyid=1 00:27:57.790 12:03:48 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:57.790 12:03:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.790 12:03:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:57.790 12:03:48 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:27:57.790 12:03:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:27:57.790 12:03:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.790 12:03:48 -- host/auth.sh@68 -- # digest=sha256 00:27:57.790 12:03:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:57.790 12:03:48 -- host/auth.sh@68 -- # keyid=1 00:27:57.790 12:03:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.790 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.790 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.790 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.790 12:03:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.790 12:03:48 -- nvmf/common.sh@717 -- # local ip 00:27:57.790 12:03:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.790 12:03:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.790 12:03:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.790 12:03:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.790 12:03:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.790 12:03:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.790 12:03:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.790 12:03:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.790 12:03:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.790 12:03:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:57.790 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.790 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.353 nvme0n1 00:27:58.353 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.353 12:03:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.353 12:03:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.353 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.353 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.353 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.353 12:03:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.353 12:03:48 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:58.353 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.353 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.353 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.353 12:03:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.353 12:03:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:58.353 12:03:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.353 12:03:48 -- host/auth.sh@44 -- # digest=sha256 00:27:58.353 12:03:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.353 12:03:48 -- host/auth.sh@44 -- # keyid=2 00:27:58.353 12:03:48 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:58.353 12:03:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.353 12:03:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:58.353 12:03:48 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:27:58.353 12:03:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:27:58.353 12:03:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.353 12:03:48 -- host/auth.sh@68 -- # digest=sha256 00:27:58.353 12:03:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:58.353 12:03:48 -- host/auth.sh@68 -- # keyid=2 00:27:58.353 12:03:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.353 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.353 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.353 12:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.353 12:03:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.353 12:03:48 -- nvmf/common.sh@717 -- # local ip 00:27:58.353 12:03:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.353 12:03:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.353 12:03:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.353 12:03:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.353 12:03:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.353 12:03:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.353 12:03:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.353 12:03:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.353 12:03:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.353 12:03:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.353 12:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.353 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.611 nvme0n1 00:27:58.611 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.611 12:03:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.611 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.611 12:03:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.611 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:58.611 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.611 12:03:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.611 12:03:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.611 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.611 12:03:49 -- common/autotest_common.sh@10 -- # 
set +x 00:27:58.868 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.868 12:03:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.868 12:03:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:58.868 12:03:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.868 12:03:49 -- host/auth.sh@44 -- # digest=sha256 00:27:58.868 12:03:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.868 12:03:49 -- host/auth.sh@44 -- # keyid=3 00:27:58.868 12:03:49 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:58.868 12:03:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.868 12:03:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:58.868 12:03:49 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:27:58.868 12:03:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:27:58.868 12:03:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.868 12:03:49 -- host/auth.sh@68 -- # digest=sha256 00:27:58.868 12:03:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:58.868 12:03:49 -- host/auth.sh@68 -- # keyid=3 00:27:58.868 12:03:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.868 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.868 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:58.868 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.868 12:03:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.868 12:03:49 -- nvmf/common.sh@717 -- # local ip 00:27:58.868 12:03:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.868 12:03:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.868 12:03:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.868 12:03:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.868 12:03:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.868 12:03:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.868 12:03:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.868 12:03:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.868 12:03:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.868 12:03:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:58.868 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.868 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:59.125 nvme0n1 00:27:59.125 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.125 12:03:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.125 12:03:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.125 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.125 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:59.125 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.125 12:03:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.125 12:03:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.125 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.125 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:59.125 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.125 12:03:49 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.126 12:03:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:59.126 12:03:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.126 12:03:49 -- host/auth.sh@44 -- # digest=sha256 00:27:59.126 12:03:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.126 12:03:49 -- host/auth.sh@44 -- # keyid=4 00:27:59.126 12:03:49 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:59.126 12:03:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.126 12:03:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:59.126 12:03:49 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:27:59.126 12:03:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:27:59.126 12:03:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.126 12:03:49 -- host/auth.sh@68 -- # digest=sha256 00:27:59.126 12:03:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:59.126 12:03:49 -- host/auth.sh@68 -- # keyid=4 00:27:59.126 12:03:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.126 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.126 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:59.126 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.126 12:03:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.126 12:03:49 -- nvmf/common.sh@717 -- # local ip 00:27:59.126 12:03:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.126 12:03:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.126 12:03:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.126 12:03:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.126 12:03:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.126 12:03:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.126 12:03:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.126 12:03:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.126 12:03:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.126 12:03:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.126 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.126 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:59.690 nvme0n1 00:27:59.690 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.690 12:03:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.690 12:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.690 12:03:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.690 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:27:59.690 12:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.690 12:03:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.690 12:03:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.690 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.690 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:27:59.690 12:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.690 12:03:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.690 12:03:50 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.690 12:03:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:59.690 12:03:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.690 12:03:50 -- host/auth.sh@44 -- # digest=sha256 00:27:59.690 12:03:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.690 12:03:50 -- host/auth.sh@44 -- # keyid=0 00:27:59.690 12:03:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:59.690 12:03:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.690 12:03:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:59.690 12:03:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:27:59.690 12:03:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:27:59.690 12:03:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.690 12:03:50 -- host/auth.sh@68 -- # digest=sha256 00:27:59.690 12:03:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:59.690 12:03:50 -- host/auth.sh@68 -- # keyid=0 00:27:59.690 12:03:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:59.690 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.690 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:27:59.690 12:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.690 12:03:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.690 12:03:50 -- nvmf/common.sh@717 -- # local ip 00:27:59.690 12:03:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.690 12:03:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.690 12:03:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.690 12:03:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.690 12:03:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.691 12:03:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.691 12:03:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.691 12:03:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.691 12:03:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.691 12:03:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:59.691 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.691 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:28:00.255 nvme0n1 00:28:00.255 12:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.255 12:03:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.255 12:03:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.255 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.255 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:28:00.255 12:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.255 12:03:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.255 12:03:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.255 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.255 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:28:00.255 12:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.255 12:03:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.255 12:03:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:00.255 12:03:50 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.255 12:03:50 -- host/auth.sh@44 -- # digest=sha256 00:28:00.255 12:03:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.255 12:03:50 -- host/auth.sh@44 -- # keyid=1 00:28:00.255 12:03:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:00.255 12:03:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:00.255 12:03:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:00.255 12:03:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:00.255 12:03:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:28:00.255 12:03:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.255 12:03:50 -- host/auth.sh@68 -- # digest=sha256 00:28:00.255 12:03:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:00.255 12:03:50 -- host/auth.sh@68 -- # keyid=1 00:28:00.255 12:03:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:00.255 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.255 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:28:00.255 12:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.255 12:03:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.255 12:03:50 -- nvmf/common.sh@717 -- # local ip 00:28:00.255 12:03:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.255 12:03:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.255 12:03:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.255 12:03:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.255 12:03:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.255 12:03:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.255 12:03:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.255 12:03:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.255 12:03:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.255 12:03:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:00.255 12:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.255 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:28:00.820 nvme0n1 00:28:00.820 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.820 12:03:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.820 12:03:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.820 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.820 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:00.820 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.820 12:03:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.820 12:03:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.820 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.820 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:00.820 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.820 12:03:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.820 12:03:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:00.820 12:03:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.820 12:03:51 -- host/auth.sh@44 -- # digest=sha256 
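The markers host/auth.sh@107-@111 echoed throughout this trace come from the driver loop that walks every digest/DH-group/key-index combination, first programming the kernel nvmet target and then exercising the connection from the SPDK host. Reconstructed from those markers alone, the loop looks roughly like the sketch below; the array contents are assumptions limited to the values actually echoed in this excerpt.

# Sketch of the host/auth.sh driver loop implied by the @107-@111 markers above.
# digests/dhgroups hold only the values echoed in this excerpt; keys[] (not
# reproduced here) holds the DHHC-1 secrets printed above, indexed 0-4.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: install key/digest/dhgroup
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify, detach
        done
    done
done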
00:28:00.820 12:03:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.820 12:03:51 -- host/auth.sh@44 -- # keyid=2 00:28:00.820 12:03:51 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:00.820 12:03:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:00.820 12:03:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:00.820 12:03:51 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:00.820 12:03:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:28:00.820 12:03:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.820 12:03:51 -- host/auth.sh@68 -- # digest=sha256 00:28:00.820 12:03:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:00.820 12:03:51 -- host/auth.sh@68 -- # keyid=2 00:28:00.820 12:03:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:00.820 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.820 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:00.820 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.820 12:03:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.820 12:03:51 -- nvmf/common.sh@717 -- # local ip 00:28:00.820 12:03:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.820 12:03:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.820 12:03:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.820 12:03:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.820 12:03:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.820 12:03:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.820 12:03:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.820 12:03:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.820 12:03:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.820 12:03:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.820 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.820 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:01.385 nvme0n1 00:28:01.385 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.385 12:03:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.385 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.385 12:03:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.385 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:01.385 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.385 12:03:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.385 12:03:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.385 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.385 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:01.642 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.642 12:03:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.642 12:03:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:01.642 12:03:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.642 12:03:51 -- host/auth.sh@44 -- # digest=sha256 00:28:01.642 12:03:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.642 12:03:51 -- host/auth.sh@44 -- # keyid=3 00:28:01.642 12:03:51 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:01.642 12:03:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:01.642 12:03:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:01.642 12:03:51 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:01.642 12:03:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:28:01.642 12:03:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.642 12:03:51 -- host/auth.sh@68 -- # digest=sha256 00:28:01.642 12:03:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:01.642 12:03:51 -- host/auth.sh@68 -- # keyid=3 00:28:01.642 12:03:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.642 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.642 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:01.642 12:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.642 12:03:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.642 12:03:51 -- nvmf/common.sh@717 -- # local ip 00:28:01.642 12:03:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.642 12:03:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.642 12:03:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.642 12:03:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.642 12:03:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.642 12:03:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.642 12:03:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.642 12:03:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.642 12:03:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.642 12:03:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:01.642 12:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.642 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:28:02.207 nvme0n1 00:28:02.207 12:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.207 12:03:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.207 12:03:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.207 12:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.207 12:03:52 -- common/autotest_common.sh@10 -- # set +x 00:28:02.207 12:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.207 12:03:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.207 12:03:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.207 12:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.207 12:03:52 -- common/autotest_common.sh@10 -- # set +x 00:28:02.207 12:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.207 12:03:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.207 12:03:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:02.207 12:03:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.207 12:03:52 -- host/auth.sh@44 -- # digest=sha256 00:28:02.207 12:03:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.207 12:03:52 -- host/auth.sh@44 -- # keyid=4 00:28:02.207 12:03:52 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:02.207 
12:03:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:02.207 12:03:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:02.207 12:03:52 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:02.207 12:03:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:28:02.207 12:03:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.207 12:03:52 -- host/auth.sh@68 -- # digest=sha256 00:28:02.207 12:03:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:02.207 12:03:52 -- host/auth.sh@68 -- # keyid=4 00:28:02.207 12:03:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:02.207 12:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.207 12:03:52 -- common/autotest_common.sh@10 -- # set +x 00:28:02.207 12:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.207 12:03:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.207 12:03:52 -- nvmf/common.sh@717 -- # local ip 00:28:02.207 12:03:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.207 12:03:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.207 12:03:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.207 12:03:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.207 12:03:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.207 12:03:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.207 12:03:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.207 12:03:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.207 12:03:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.207 12:03:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.207 12:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.207 12:03:52 -- common/autotest_common.sh@10 -- # set +x 00:28:02.774 nvme0n1 00:28:02.774 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.774 12:03:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.774 12:03:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.774 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.774 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.774 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.774 12:03:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.774 12:03:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.774 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.774 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.774 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.774 12:03:53 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:02.774 12:03:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.774 12:03:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.774 12:03:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:02.774 12:03:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.774 12:03:53 -- host/auth.sh@44 -- # digest=sha384 00:28:02.774 12:03:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.774 12:03:53 -- host/auth.sh@44 -- # keyid=0 00:28:02.774 12:03:53 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:02.774 12:03:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.774 12:03:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:02.774 12:03:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:02.774 12:03:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:28:02.774 12:03:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.774 12:03:53 -- host/auth.sh@68 -- # digest=sha384 00:28:02.774 12:03:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:02.774 12:03:53 -- host/auth.sh@68 -- # keyid=0 00:28:02.774 12:03:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.774 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.774 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.774 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.775 12:03:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.775 12:03:53 -- nvmf/common.sh@717 -- # local ip 00:28:02.775 12:03:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.775 12:03:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.775 12:03:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.775 12:03:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.775 12:03:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.775 12:03:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.775 12:03:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.775 12:03:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.775 12:03:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.775 12:03:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:02.775 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.775 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.033 nvme0n1 00:28:03.033 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.033 12:03:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.033 12:03:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.033 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.033 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.033 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.033 12:03:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.033 12:03:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.033 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.033 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.033 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.033 12:03:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.033 12:03:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:03.033 12:03:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.033 12:03:53 -- host/auth.sh@44 -- # digest=sha384 00:28:03.033 12:03:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.033 12:03:53 -- host/auth.sh@44 -- # keyid=1 00:28:03.033 12:03:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:03.033 12:03:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.033 
12:03:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:03.033 12:03:53 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:03.033 12:03:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:28:03.033 12:03:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.033 12:03:53 -- host/auth.sh@68 -- # digest=sha384 00:28:03.033 12:03:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:03.033 12:03:53 -- host/auth.sh@68 -- # keyid=1 00:28:03.033 12:03:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.033 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.033 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.034 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.034 12:03:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.034 12:03:53 -- nvmf/common.sh@717 -- # local ip 00:28:03.034 12:03:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.034 12:03:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.034 12:03:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.034 12:03:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.034 12:03:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.034 12:03:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.034 12:03:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.034 12:03:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.034 12:03:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.034 12:03:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:03.034 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.034 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.034 nvme0n1 00:28:03.034 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.034 12:03:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.034 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.034 12:03:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.034 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.292 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.292 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.292 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.292 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.292 12:03:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:03.292 12:03:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.292 12:03:53 -- host/auth.sh@44 -- # digest=sha384 00:28:03.292 12:03:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.292 12:03:53 -- host/auth.sh@44 -- # keyid=2 00:28:03.292 12:03:53 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:03.292 12:03:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.292 12:03:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:03.292 12:03:53 -- host/auth.sh@49 -- # echo 
DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:03.292 12:03:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:28:03.292 12:03:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.292 12:03:53 -- host/auth.sh@68 -- # digest=sha384 00:28:03.292 12:03:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:03.292 12:03:53 -- host/auth.sh@68 -- # keyid=2 00:28:03.292 12:03:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.292 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.292 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.292 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.292 12:03:53 -- nvmf/common.sh@717 -- # local ip 00:28:03.292 12:03:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.292 12:03:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.292 12:03:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.292 12:03:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.292 12:03:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.292 12:03:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.292 12:03:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.292 12:03:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.292 12:03:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.292 12:03:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:03.292 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.292 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.292 nvme0n1 00:28:03.292 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.292 12:03:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.292 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.292 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.292 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.292 12:03:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.292 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.292 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.550 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.550 12:03:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.550 12:03:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:03.550 12:03:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.550 12:03:53 -- host/auth.sh@44 -- # digest=sha384 00:28:03.550 12:03:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.550 12:03:53 -- host/auth.sh@44 -- # keyid=3 00:28:03.550 12:03:53 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:03.550 12:03:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.550 12:03:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:03.550 12:03:53 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:03.550 12:03:53 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:28:03.550 12:03:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.550 12:03:53 -- host/auth.sh@68 -- # digest=sha384 00:28:03.550 12:03:53 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:03.550 12:03:53 -- host/auth.sh@68 -- # keyid=3 00:28:03.550 12:03:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.550 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.550 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.550 12:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.550 12:03:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.550 12:03:53 -- nvmf/common.sh@717 -- # local ip 00:28:03.550 12:03:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.550 12:03:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.550 12:03:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.550 12:03:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.550 12:03:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.550 12:03:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.550 12:03:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.550 12:03:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.550 12:03:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.550 12:03:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:03.550 12:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.550 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.550 nvme0n1 00:28:03.550 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.550 12:03:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.550 12:03:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.550 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.550 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.550 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.550 12:03:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.550 12:03:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.550 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.550 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.550 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.550 12:03:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.550 12:03:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:03.550 12:03:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.550 12:03:54 -- host/auth.sh@44 -- # digest=sha384 00:28:03.550 12:03:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.550 12:03:54 -- host/auth.sh@44 -- # keyid=4 00:28:03.550 12:03:54 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:03.550 12:03:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.550 12:03:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:03.550 12:03:54 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:03.550 12:03:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:28:03.550 12:03:54 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:28:03.550 12:03:54 -- host/auth.sh@68 -- # digest=sha384 00:28:03.550 12:03:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:03.550 12:03:54 -- host/auth.sh@68 -- # keyid=4 00:28:03.550 12:03:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.550 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.550 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.550 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.550 12:03:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.550 12:03:54 -- nvmf/common.sh@717 -- # local ip 00:28:03.550 12:03:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.550 12:03:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.550 12:03:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.550 12:03:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.550 12:03:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.550 12:03:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.550 12:03:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.550 12:03:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.551 12:03:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.551 12:03:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.551 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.551 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.808 nvme0n1 00:28:03.808 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.808 12:03:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.808 12:03:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.808 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.808 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.808 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.808 12:03:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.808 12:03:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.808 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.808 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.808 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.808 12:03:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.808 12:03:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.808 12:03:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:03.808 12:03:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.808 12:03:54 -- host/auth.sh@44 -- # digest=sha384 00:28:03.808 12:03:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.808 12:03:54 -- host/auth.sh@44 -- # keyid=0 00:28:03.808 12:03:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:03.808 12:03:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.808 12:03:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:03.808 12:03:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:03.808 12:03:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:28:03.809 12:03:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.809 12:03:54 -- host/auth.sh@68 -- # 
digest=sha384 00:28:03.809 12:03:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:03.809 12:03:54 -- host/auth.sh@68 -- # keyid=0 00:28:03.809 12:03:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:03.809 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.809 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.809 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.809 12:03:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.809 12:03:54 -- nvmf/common.sh@717 -- # local ip 00:28:03.809 12:03:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.809 12:03:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.809 12:03:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.809 12:03:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.809 12:03:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.809 12:03:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.809 12:03:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.809 12:03:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.809 12:03:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.809 12:03:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:03.809 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.809 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.066 nvme0n1 00:28:04.066 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.066 12:03:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.066 12:03:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.066 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.066 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.066 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.066 12:03:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.066 12:03:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.066 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.066 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.066 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.066 12:03:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.066 12:03:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:04.067 12:03:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.067 12:03:54 -- host/auth.sh@44 -- # digest=sha384 00:28:04.067 12:03:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.067 12:03:54 -- host/auth.sh@44 -- # keyid=1 00:28:04.067 12:03:54 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:04.067 12:03:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.067 12:03:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:04.067 12:03:54 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:04.067 12:03:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:28:04.067 12:03:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.067 12:03:54 -- host/auth.sh@68 -- # digest=sha384 00:28:04.067 12:03:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:04.067 12:03:54 -- host/auth.sh@68 
-- # keyid=1 00:28:04.067 12:03:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.067 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.067 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.067 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.067 12:03:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.067 12:03:54 -- nvmf/common.sh@717 -- # local ip 00:28:04.067 12:03:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.067 12:03:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.067 12:03:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.067 12:03:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.067 12:03:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.067 12:03:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.067 12:03:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.067 12:03:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.067 12:03:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.067 12:03:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:04.067 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.067 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.325 nvme0n1 00:28:04.325 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.325 12:03:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.325 12:03:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.325 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.325 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.325 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.325 12:03:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.325 12:03:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.325 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.325 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.325 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.325 12:03:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.325 12:03:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:04.325 12:03:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.325 12:03:54 -- host/auth.sh@44 -- # digest=sha384 00:28:04.325 12:03:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.325 12:03:54 -- host/auth.sh@44 -- # keyid=2 00:28:04.325 12:03:54 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:04.325 12:03:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.325 12:03:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:04.325 12:03:54 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:04.325 12:03:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:28:04.325 12:03:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.325 12:03:54 -- host/auth.sh@68 -- # digest=sha384 00:28:04.325 12:03:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:04.325 12:03:54 -- host/auth.sh@68 -- # keyid=2 00:28:04.325 12:03:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.325 12:03:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.325 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.325 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.325 12:03:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.325 12:03:54 -- nvmf/common.sh@717 -- # local ip 00:28:04.325 12:03:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.325 12:03:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.325 12:03:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.325 12:03:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.325 12:03:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.325 12:03:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.325 12:03:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.325 12:03:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.325 12:03:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.325 12:03:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.325 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.325 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.584 nvme0n1 00:28:04.584 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.584 12:03:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.584 12:03:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.584 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.584 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.584 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.584 12:03:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.584 12:03:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.584 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.584 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:04.584 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.584 12:03:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.584 12:03:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:04.584 12:03:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.584 12:03:55 -- host/auth.sh@44 -- # digest=sha384 00:28:04.584 12:03:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.584 12:03:55 -- host/auth.sh@44 -- # keyid=3 00:28:04.584 12:03:55 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:04.584 12:03:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.584 12:03:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:04.584 12:03:55 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:04.584 12:03:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:28:04.584 12:03:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.584 12:03:55 -- host/auth.sh@68 -- # digest=sha384 00:28:04.584 12:03:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:04.584 12:03:55 -- host/auth.sh@68 -- # keyid=3 00:28:04.584 12:03:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.584 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.584 12:03:55 -- common/autotest_common.sh@10 -- # set +x 
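Every connect_authenticate call in this stretch of the trace repeats the same RPC sequence against the local SPDK target. Reassembled from the host/auth.sh@66-@74 markers only, it looks roughly like the sketch below; error handling beyond the [[ nvme0 == nvme0 ]] check is not visible here and is left out.

# Sketch of connect_authenticate as traced at host/auth.sh@66-@74; the address,
# trsvcid and NQNs are the literal values echoed in this run.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # restrict the host to the digest/DH group pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach with the DH-HMAC-CHAP key whose index matches the target-side entry
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid"
    # authentication succeeded only if the controller actually shows up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}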
00:28:04.584 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.584 12:03:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.584 12:03:55 -- nvmf/common.sh@717 -- # local ip 00:28:04.584 12:03:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.584 12:03:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.584 12:03:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.584 12:03:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.584 12:03:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.584 12:03:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.584 12:03:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.584 12:03:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.584 12:03:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.584 12:03:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:04.584 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.584 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:04.842 nvme0n1 00:28:04.842 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.842 12:03:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.842 12:03:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.842 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.842 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:04.842 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.842 12:03:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.842 12:03:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.842 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.842 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:04.842 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.842 12:03:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.842 12:03:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:04.842 12:03:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.842 12:03:55 -- host/auth.sh@44 -- # digest=sha384 00:28:04.842 12:03:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.842 12:03:55 -- host/auth.sh@44 -- # keyid=4 00:28:04.842 12:03:55 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:04.842 12:03:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.842 12:03:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:04.842 12:03:55 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:04.842 12:03:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:28:04.842 12:03:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.842 12:03:55 -- host/auth.sh@68 -- # digest=sha384 00:28:04.842 12:03:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:04.842 12:03:55 -- host/auth.sh@68 -- # keyid=4 00:28:04.842 12:03:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.842 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.842 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:04.842 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
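Right before each attach, the trace expands get_main_ns_ip (nvmf/common.sh@717-@731), which maps the transport in use to the name of the shell variable holding the target address and then expands that variable indirectly. The transport variable is only visible post-expansion ("tcp"), so its name below, and the early returns on the empty branches, are assumptions.

# Sketch of get_main_ns_ip as traced at nvmf/common.sh@717-@731.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport selected (assumed guard)
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport (assumed guard)
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                             # address variable unset (assumed guard)
    echo "${!ip}"                                           # 10.0.0.1 in this run
}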
00:28:04.842 12:03:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.842 12:03:55 -- nvmf/common.sh@717 -- # local ip 00:28:04.842 12:03:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.842 12:03:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.842 12:03:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.842 12:03:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.842 12:03:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.842 12:03:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.842 12:03:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.842 12:03:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.842 12:03:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.842 12:03:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.842 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.842 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 nvme0n1 00:28:05.100 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.100 12:03:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.100 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.100 12:03:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.100 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.100 12:03:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.100 12:03:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.100 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.100 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.100 12:03:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.100 12:03:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.100 12:03:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:05.100 12:03:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.100 12:03:55 -- host/auth.sh@44 -- # digest=sha384 00:28:05.100 12:03:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.100 12:03:55 -- host/auth.sh@44 -- # keyid=0 00:28:05.100 12:03:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:05.100 12:03:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.100 12:03:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:05.100 12:03:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:05.100 12:03:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:28:05.100 12:03:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.100 12:03:55 -- host/auth.sh@68 -- # digest=sha384 00:28:05.100 12:03:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:05.100 12:03:55 -- host/auth.sh@68 -- # keyid=0 00:28:05.100 12:03:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.100 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.100 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.100 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.101 12:03:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.101 12:03:55 -- 
nvmf/common.sh@717 -- # local ip 00:28:05.101 12:03:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.101 12:03:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.101 12:03:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.101 12:03:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.101 12:03:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.101 12:03:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.101 12:03:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.101 12:03:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.101 12:03:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.101 12:03:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:05.101 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.101 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.359 nvme0n1 00:28:05.359 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.359 12:03:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.359 12:03:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.359 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.359 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.359 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.359 12:03:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.359 12:03:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.359 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.359 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.359 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.359 12:03:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.359 12:03:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:05.359 12:03:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.359 12:03:55 -- host/auth.sh@44 -- # digest=sha384 00:28:05.359 12:03:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.359 12:03:55 -- host/auth.sh@44 -- # keyid=1 00:28:05.359 12:03:55 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:05.359 12:03:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.359 12:03:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:05.359 12:03:55 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:05.359 12:03:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:28:05.359 12:03:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.359 12:03:55 -- host/auth.sh@68 -- # digest=sha384 00:28:05.359 12:03:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:05.359 12:03:55 -- host/auth.sh@68 -- # keyid=1 00:28:05.359 12:03:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.359 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.359 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.359 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.359 12:03:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.359 12:03:55 -- nvmf/common.sh@717 -- # local ip 00:28:05.359 12:03:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.359 12:03:55 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.359 12:03:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.359 12:03:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.359 12:03:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.359 12:03:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.359 12:03:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.359 12:03:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.359 12:03:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.359 12:03:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:05.359 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.359 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:05.617 nvme0n1 00:28:05.617 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.617 12:03:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.617 12:03:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.617 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.617 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:05.617 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.617 12:03:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.617 12:03:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.617 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.617 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:05.617 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.617 12:03:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.617 12:03:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:05.617 12:03:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.617 12:03:56 -- host/auth.sh@44 -- # digest=sha384 00:28:05.617 12:03:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.617 12:03:56 -- host/auth.sh@44 -- # keyid=2 00:28:05.617 12:03:56 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:05.617 12:03:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.617 12:03:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:05.617 12:03:56 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:05.617 12:03:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:28:05.617 12:03:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.617 12:03:56 -- host/auth.sh@68 -- # digest=sha384 00:28:05.617 12:03:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:05.617 12:03:56 -- host/auth.sh@68 -- # keyid=2 00:28:05.617 12:03:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.617 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.617 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:05.617 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.617 12:03:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.617 12:03:56 -- nvmf/common.sh@717 -- # local ip 00:28:05.617 12:03:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.617 12:03:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.617 12:03:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.617 12:03:56 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.617 12:03:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.617 12:03:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.617 12:03:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.617 12:03:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.617 12:03:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.617 12:03:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:05.617 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.617 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:05.875 nvme0n1 00:28:05.875 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.875 12:03:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.875 12:03:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.875 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.875 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:05.875 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.133 12:03:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.133 12:03:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.133 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.133 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.133 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.133 12:03:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.133 12:03:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:06.133 12:03:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.133 12:03:56 -- host/auth.sh@44 -- # digest=sha384 00:28:06.133 12:03:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.133 12:03:56 -- host/auth.sh@44 -- # keyid=3 00:28:06.133 12:03:56 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:06.133 12:03:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.133 12:03:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:06.133 12:03:56 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:06.133 12:03:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:28:06.133 12:03:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.133 12:03:56 -- host/auth.sh@68 -- # digest=sha384 00:28:06.133 12:03:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:06.133 12:03:56 -- host/auth.sh@68 -- # keyid=3 00:28:06.133 12:03:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:06.133 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.133 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.133 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.133 12:03:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.133 12:03:56 -- nvmf/common.sh@717 -- # local ip 00:28:06.133 12:03:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.133 12:03:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.133 12:03:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.133 12:03:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.133 12:03:56 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:28:06.133 12:03:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.133 12:03:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.133 12:03:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.133 12:03:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.133 12:03:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:06.133 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.133 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.391 nvme0n1 00:28:06.391 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.391 12:03:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.391 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.391 12:03:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.391 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.391 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.391 12:03:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.391 12:03:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.391 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.391 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.391 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.391 12:03:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.391 12:03:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:06.391 12:03:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.391 12:03:56 -- host/auth.sh@44 -- # digest=sha384 00:28:06.391 12:03:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.391 12:03:56 -- host/auth.sh@44 -- # keyid=4 00:28:06.391 12:03:56 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:06.391 12:03:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.391 12:03:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:06.391 12:03:56 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:06.391 12:03:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:28:06.391 12:03:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.391 12:03:56 -- host/auth.sh@68 -- # digest=sha384 00:28:06.391 12:03:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:06.391 12:03:56 -- host/auth.sh@68 -- # keyid=4 00:28:06.391 12:03:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:06.391 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.391 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.391 12:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.391 12:03:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.391 12:03:56 -- nvmf/common.sh@717 -- # local ip 00:28:06.391 12:03:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.391 12:03:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.391 12:03:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.391 12:03:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.391 12:03:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.391 12:03:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:28:06.391 12:03:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.391 12:03:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.391 12:03:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.391 12:03:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.391 12:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.391 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:28:06.649 nvme0n1 00:28:06.649 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.649 12:03:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.649 12:03:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.649 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.649 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:06.649 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.649 12:03:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.649 12:03:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.649 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.649 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:06.649 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.649 12:03:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.649 12:03:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.649 12:03:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:06.649 12:03:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.649 12:03:57 -- host/auth.sh@44 -- # digest=sha384 00:28:06.649 12:03:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.649 12:03:57 -- host/auth.sh@44 -- # keyid=0 00:28:06.649 12:03:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:06.649 12:03:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.649 12:03:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:06.649 12:03:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:06.649 12:03:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:28:06.649 12:03:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.649 12:03:57 -- host/auth.sh@68 -- # digest=sha384 00:28:06.649 12:03:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:06.649 12:03:57 -- host/auth.sh@68 -- # keyid=0 00:28:06.649 12:03:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:06.649 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.649 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:06.649 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.649 12:03:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.649 12:03:57 -- nvmf/common.sh@717 -- # local ip 00:28:06.649 12:03:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.649 12:03:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.649 12:03:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.649 12:03:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.649 12:03:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.649 12:03:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.649 12:03:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.649 
12:03:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.649 12:03:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.649 12:03:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:06.649 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.649 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.214 nvme0n1 00:28:07.214 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.214 12:03:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.214 12:03:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.214 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.214 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.214 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.214 12:03:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.214 12:03:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.214 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.214 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.214 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.214 12:03:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.214 12:03:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:07.214 12:03:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.214 12:03:57 -- host/auth.sh@44 -- # digest=sha384 00:28:07.214 12:03:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.214 12:03:57 -- host/auth.sh@44 -- # keyid=1 00:28:07.214 12:03:57 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:07.214 12:03:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.214 12:03:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:07.214 12:03:57 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:07.214 12:03:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:28:07.214 12:03:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.214 12:03:57 -- host/auth.sh@68 -- # digest=sha384 00:28:07.214 12:03:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:07.214 12:03:57 -- host/auth.sh@68 -- # keyid=1 00:28:07.214 12:03:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:07.214 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.214 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.214 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.214 12:03:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.214 12:03:57 -- nvmf/common.sh@717 -- # local ip 00:28:07.214 12:03:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.214 12:03:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.214 12:03:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.214 12:03:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.214 12:03:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.214 12:03:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.214 12:03:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.214 12:03:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.214 12:03:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
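For reference, the get_main_ns_ip helper that produced the "echo 10.0.0.1" above resolves the target address from the transport in use. A minimal sketch reconstructed from the nvmf/common.sh@717-@731 lines of the trace (the TEST_TRANSPORT variable name and the early-return handling are assumptions; the candidate map and the tcp -> NVMF_INITIATOR_IP indirection are read off the log):

    get_main_ns_ip() {
        # Shape follows the nvmf/common.sh@717-@731 trace above.
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP for tcp
        ip=${!ip}                                               # indirect expansion -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }

For this tcp run the indirection lands on 10.0.0.1, which is why every bdev_nvme_attach_controller call below targets that address.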
00:28:07.214 12:03:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:07.214 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.214 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.472 nvme0n1 00:28:07.472 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.472 12:03:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.472 12:03:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.472 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.472 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.472 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.472 12:03:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.472 12:03:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.472 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.472 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.472 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.472 12:03:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.472 12:03:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:07.472 12:03:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.472 12:03:57 -- host/auth.sh@44 -- # digest=sha384 00:28:07.472 12:03:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.472 12:03:57 -- host/auth.sh@44 -- # keyid=2 00:28:07.472 12:03:57 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:07.472 12:03:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.472 12:03:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:07.472 12:03:57 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:07.472 12:03:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:28:07.472 12:03:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.472 12:03:57 -- host/auth.sh@68 -- # digest=sha384 00:28:07.472 12:03:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:07.472 12:03:57 -- host/auth.sh@68 -- # keyid=2 00:28:07.472 12:03:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:07.472 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.472 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:07.472 12:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.472 12:03:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.472 12:03:57 -- nvmf/common.sh@717 -- # local ip 00:28:07.472 12:03:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.472 12:03:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.472 12:03:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.472 12:03:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.472 12:03:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.472 12:03:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.472 12:03:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.472 12:03:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.472 12:03:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.472 12:03:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:07.472 12:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.472 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:28:08.092 nvme0n1 00:28:08.092 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.092 12:03:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.092 12:03:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.092 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.092 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.092 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.092 12:03:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.092 12:03:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.092 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.092 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.092 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.092 12:03:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.092 12:03:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:08.092 12:03:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.092 12:03:58 -- host/auth.sh@44 -- # digest=sha384 00:28:08.092 12:03:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.092 12:03:58 -- host/auth.sh@44 -- # keyid=3 00:28:08.092 12:03:58 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:08.092 12:03:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.092 12:03:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:08.092 12:03:58 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:08.092 12:03:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:28:08.092 12:03:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.092 12:03:58 -- host/auth.sh@68 -- # digest=sha384 00:28:08.092 12:03:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:08.092 12:03:58 -- host/auth.sh@68 -- # keyid=3 00:28:08.092 12:03:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:08.092 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.092 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.092 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.092 12:03:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.092 12:03:58 -- nvmf/common.sh@717 -- # local ip 00:28:08.092 12:03:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.092 12:03:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.092 12:03:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.092 12:03:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.092 12:03:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.092 12:03:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.092 12:03:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.092 12:03:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.092 12:03:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.092 12:03:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:08.092 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 
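Each keyid pass ends in the same initiator-side sequence. A rough sketch of connect_authenticate as implied by the host/auth.sh@66-@74 lines (argument plumbing and error handling are assumptions; the RPC names and flags are taken verbatim from the trace, and rpc_cmd is the test harness wrapper around the SPDK RPC client):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Restrict the initiator to the digest/dhgroup under test, then attach
        # with the matching DH-HMAC-CHAP key.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}"

        # Success check: the new controller must show up as nvme0, then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The "nvme0n1" tokens interleaved in the log are the namespace of the attached controller appearing between the RPC calls.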
00:28:08.092 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 nvme0n1 00:28:08.351 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.351 12:03:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.351 12:03:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.351 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.351 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.351 12:03:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.351 12:03:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.351 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.351 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.351 12:03:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.351 12:03:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:08.351 12:03:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.351 12:03:58 -- host/auth.sh@44 -- # digest=sha384 00:28:08.351 12:03:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.351 12:03:58 -- host/auth.sh@44 -- # keyid=4 00:28:08.351 12:03:58 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:08.351 12:03:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.351 12:03:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:08.351 12:03:58 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:08.351 12:03:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:28:08.351 12:03:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.351 12:03:58 -- host/auth.sh@68 -- # digest=sha384 00:28:08.351 12:03:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:08.351 12:03:58 -- host/auth.sh@68 -- # keyid=4 00:28:08.351 12:03:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:08.351 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.351 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 12:03:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.351 12:03:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.351 12:03:58 -- nvmf/common.sh@717 -- # local ip 00:28:08.351 12:03:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.351 12:03:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.351 12:03:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.351 12:03:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.351 12:03:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.351 12:03:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.351 12:03:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.351 12:03:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.351 12:03:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.351 12:03:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.351 12:03:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.351 12:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:08.918 
nvme0n1 00:28:08.918 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.918 12:03:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.918 12:03:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.918 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.918 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:08.918 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.918 12:03:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.918 12:03:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.918 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.918 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:08.918 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.918 12:03:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.918 12:03:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.918 12:03:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:08.918 12:03:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.918 12:03:59 -- host/auth.sh@44 -- # digest=sha384 00:28:08.918 12:03:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.918 12:03:59 -- host/auth.sh@44 -- # keyid=0 00:28:08.918 12:03:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:08.918 12:03:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.918 12:03:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:08.918 12:03:59 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:08.918 12:03:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:28:08.918 12:03:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.918 12:03:59 -- host/auth.sh@68 -- # digest=sha384 00:28:08.918 12:03:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:08.918 12:03:59 -- host/auth.sh@68 -- # keyid=0 00:28:08.918 12:03:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:08.918 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.918 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:08.918 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.918 12:03:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.918 12:03:59 -- nvmf/common.sh@717 -- # local ip 00:28:08.918 12:03:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.918 12:03:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.918 12:03:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.918 12:03:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.918 12:03:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.918 12:03:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.918 12:03:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.918 12:03:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.918 12:03:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.918 12:03:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:08.918 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.918 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.485 nvme0n1 00:28:09.485 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
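On the target side, nvmet_auth_set_key installs the matching secret before each connect attempt. The three echoes at host/auth.sh@47-@49 are visible in the trace, but their redirections are not, so the kernel nvmet configfs attribute paths in this sketch are assumptions; only the echoed values and the argument order come from the log:

    nvmet_auth_set_key() {
        # Shape implied by the host/auth.sh@42-@49 lines above.
        local digest dhgroup keyid key
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[$keyid]}

        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac(${digest})" > "$hostdir/dhchap_hash"     # assumed attribute name
        echo "$dhgroup"        > "$hostdir/dhchap_dhgroup"  # assumed attribute name
        echo "$key"            > "$hostdir/dhchap_key"      # assumed attribute name
    }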
00:28:09.485 12:03:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.485 12:03:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.485 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.485 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.485 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.485 12:03:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.485 12:03:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.485 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.485 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.485 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.485 12:03:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.485 12:03:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:09.485 12:03:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.485 12:03:59 -- host/auth.sh@44 -- # digest=sha384 00:28:09.485 12:03:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.485 12:03:59 -- host/auth.sh@44 -- # keyid=1 00:28:09.485 12:03:59 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:09.485 12:03:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:09.485 12:03:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:09.485 12:03:59 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:09.485 12:03:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:28:09.485 12:03:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.485 12:03:59 -- host/auth.sh@68 -- # digest=sha384 00:28:09.485 12:03:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:09.485 12:03:59 -- host/auth.sh@68 -- # keyid=1 00:28:09.485 12:03:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:09.485 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.485 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.485 12:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.485 12:03:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.485 12:03:59 -- nvmf/common.sh@717 -- # local ip 00:28:09.485 12:03:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.485 12:03:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.485 12:03:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.485 12:03:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.485 12:03:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.486 12:03:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.486 12:03:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.486 12:03:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.486 12:03:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.486 12:03:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:09.486 12:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.486 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:28:10.052 nvme0n1 00:28:10.052 12:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.052 12:04:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.052 12:04:00 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:10.053 12:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.053 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:10.053 12:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.053 12:04:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.053 12:04:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.053 12:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.053 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:10.053 12:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.053 12:04:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.053 12:04:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:10.053 12:04:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.053 12:04:00 -- host/auth.sh@44 -- # digest=sha384 00:28:10.053 12:04:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.053 12:04:00 -- host/auth.sh@44 -- # keyid=2 00:28:10.053 12:04:00 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:10.053 12:04:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:10.053 12:04:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:10.053 12:04:00 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:10.053 12:04:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:28:10.053 12:04:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.053 12:04:00 -- host/auth.sh@68 -- # digest=sha384 00:28:10.053 12:04:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:10.053 12:04:00 -- host/auth.sh@68 -- # keyid=2 00:28:10.053 12:04:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:10.053 12:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.053 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:10.053 12:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.053 12:04:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.053 12:04:00 -- nvmf/common.sh@717 -- # local ip 00:28:10.053 12:04:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.053 12:04:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.053 12:04:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.053 12:04:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.053 12:04:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.053 12:04:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.053 12:04:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.053 12:04:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.053 12:04:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.053 12:04:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:10.053 12:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.053 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:10.619 nvme0n1 00:28:10.619 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.619 12:04:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.619 12:04:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.619 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.619 12:04:01 -- common/autotest_common.sh@10 -- # set +x 
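The @107-@110 markers show the loops driving all of this; a sketch with the array contents limited to the values visible in this portion of the log (the full lists in the script may be longer):

    digests=("sha384" "sha512")                                       # seen in this part of the trace
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do          # host/auth.sh@107
        for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@108
            for keyid in "${!keys[@]}"; do     # host/auth.sh@109, keyids 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @110: target-side secret
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # @111: initiator attach/verify/detach
            done
        done
    done

So every digest/dhgroup pair is exercised against all five keys, which is why the same attach/detach pattern repeats throughout the log.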
00:28:10.619 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.619 12:04:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.619 12:04:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.619 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.619 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:10.619 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.619 12:04:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.619 12:04:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:10.619 12:04:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.619 12:04:01 -- host/auth.sh@44 -- # digest=sha384 00:28:10.619 12:04:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.619 12:04:01 -- host/auth.sh@44 -- # keyid=3 00:28:10.620 12:04:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:10.620 12:04:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:10.620 12:04:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:10.620 12:04:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:10.620 12:04:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:28:10.620 12:04:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.620 12:04:01 -- host/auth.sh@68 -- # digest=sha384 00:28:10.620 12:04:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:10.620 12:04:01 -- host/auth.sh@68 -- # keyid=3 00:28:10.620 12:04:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:10.620 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.620 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:10.620 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.620 12:04:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.620 12:04:01 -- nvmf/common.sh@717 -- # local ip 00:28:10.620 12:04:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.620 12:04:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.620 12:04:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.620 12:04:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.620 12:04:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.620 12:04:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.620 12:04:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.620 12:04:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.620 12:04:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.620 12:04:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:10.620 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.620 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:11.187 nvme0n1 00:28:11.187 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.187 12:04:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.187 12:04:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.187 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.187 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:11.187 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.187 12:04:01 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:28:11.187 12:04:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.187 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.187 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:11.187 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.187 12:04:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.187 12:04:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:11.187 12:04:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.187 12:04:01 -- host/auth.sh@44 -- # digest=sha384 00:28:11.187 12:04:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.187 12:04:01 -- host/auth.sh@44 -- # keyid=4 00:28:11.187 12:04:01 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:11.187 12:04:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.187 12:04:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:11.187 12:04:01 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:11.187 12:04:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:28:11.187 12:04:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.187 12:04:01 -- host/auth.sh@68 -- # digest=sha384 00:28:11.187 12:04:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:11.187 12:04:01 -- host/auth.sh@68 -- # keyid=4 00:28:11.187 12:04:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:11.187 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.187 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:11.187 12:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.187 12:04:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.187 12:04:01 -- nvmf/common.sh@717 -- # local ip 00:28:11.187 12:04:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.187 12:04:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.187 12:04:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.187 12:04:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.187 12:04:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.187 12:04:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.187 12:04:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.187 12:04:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.187 12:04:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.187 12:04:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.187 12:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.187 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:28:11.753 nvme0n1 00:28:11.753 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.753 12:04:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.753 12:04:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.753 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.753 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:11.753 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:12.012 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.012 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.012 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:12.012 12:04:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.012 12:04:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.012 12:04:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:12.012 12:04:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.012 12:04:02 -- host/auth.sh@44 -- # digest=sha512 00:28:12.012 12:04:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.012 12:04:02 -- host/auth.sh@44 -- # keyid=0 00:28:12.012 12:04:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:12.012 12:04:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.012 12:04:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:12.012 12:04:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:12.012 12:04:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:28:12.012 12:04:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.012 12:04:02 -- host/auth.sh@68 -- # digest=sha512 00:28:12.012 12:04:02 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:12.012 12:04:02 -- host/auth.sh@68 -- # keyid=0 00:28:12.012 12:04:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:12.012 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.012 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.012 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.012 12:04:02 -- nvmf/common.sh@717 -- # local ip 00:28:12.012 12:04:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.012 12:04:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.012 12:04:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.012 12:04:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.012 12:04:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.012 12:04:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.012 12:04:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.012 12:04:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.012 12:04:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.012 12:04:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:12.012 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.012 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.012 nvme0n1 00:28:12.012 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.012 12:04:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.012 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.012 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.012 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:12.012 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.012 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.012 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.012 12:04:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.012 12:04:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:12.012 12:04:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.012 12:04:02 -- host/auth.sh@44 -- # digest=sha512 00:28:12.012 12:04:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.012 12:04:02 -- host/auth.sh@44 -- # keyid=1 00:28:12.012 12:04:02 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:12.012 12:04:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.012 12:04:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:12.012 12:04:02 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:12.012 12:04:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:28:12.012 12:04:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.012 12:04:02 -- host/auth.sh@68 -- # digest=sha512 00:28:12.012 12:04:02 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:12.012 12:04:02 -- host/auth.sh@68 -- # keyid=1 00:28:12.012 12:04:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:12.012 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.012 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.271 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.271 12:04:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.271 12:04:02 -- nvmf/common.sh@717 -- # local ip 00:28:12.271 12:04:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.271 12:04:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.271 12:04:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.271 12:04:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.271 12:04:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.271 12:04:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.271 12:04:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.271 12:04:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.271 12:04:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.271 12:04:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:12.271 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.271 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.271 nvme0n1 00:28:12.271 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.271 12:04:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.271 12:04:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.271 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.271 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.271 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.271 12:04:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.271 12:04:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.271 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 
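The five DH-HMAC-CHAP secrets the run cycles through (keyid 0 through 4), copied verbatim from the trace; per the usual DHHC-1 representation the second field appears to encode how the secret is transformed (00 = unhashed), which matches keys 2-4 carrying 01/02/03:

    declare -a keys
    keys[0]="DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G:"
    keys[1]="DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==:"
    keys[2]="DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH:"
    keys[3]="DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==:"
    keys[4]="DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=:"

How these secrets were generated is not shown in this portion of the log; they are simply reused unchanged for every digest/dhgroup combination above.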
00:28:12.271 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.271 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.271 12:04:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.271 12:04:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:12.271 12:04:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.271 12:04:02 -- host/auth.sh@44 -- # digest=sha512 00:28:12.271 12:04:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.271 12:04:02 -- host/auth.sh@44 -- # keyid=2 00:28:12.271 12:04:02 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:12.271 12:04:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.271 12:04:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:12.271 12:04:02 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:12.271 12:04:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:28:12.271 12:04:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.271 12:04:02 -- host/auth.sh@68 -- # digest=sha512 00:28:12.271 12:04:02 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:12.271 12:04:02 -- host/auth.sh@68 -- # keyid=2 00:28:12.271 12:04:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:12.271 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.271 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.271 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.271 12:04:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.271 12:04:02 -- nvmf/common.sh@717 -- # local ip 00:28:12.271 12:04:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.271 12:04:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.271 12:04:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.271 12:04:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.271 12:04:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.271 12:04:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.271 12:04:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.271 12:04:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.271 12:04:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.271 12:04:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.271 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.271 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 nvme0n1 00:28:12.530 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.530 12:04:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.530 12:04:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.530 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.530 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.530 12:04:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.530 12:04:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.530 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.530 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 12:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.530 12:04:02 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.530 12:04:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:12.530 12:04:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.530 12:04:02 -- host/auth.sh@44 -- # digest=sha512 00:28:12.530 12:04:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.530 12:04:02 -- host/auth.sh@44 -- # keyid=3 00:28:12.530 12:04:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:12.530 12:04:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.530 12:04:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:12.530 12:04:02 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:12.530 12:04:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:28:12.530 12:04:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.530 12:04:02 -- host/auth.sh@68 -- # digest=sha512 00:28:12.530 12:04:02 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:12.530 12:04:02 -- host/auth.sh@68 -- # keyid=3 00:28:12.530 12:04:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:12.530 12:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.530 12:04:02 -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.530 12:04:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.530 12:04:03 -- nvmf/common.sh@717 -- # local ip 00:28:12.530 12:04:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.530 12:04:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.530 12:04:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.530 12:04:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.530 12:04:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.530 12:04:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.530 12:04:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.530 12:04:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.530 12:04:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.530 12:04:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:12.530 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.530 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:12.789 nvme0n1 00:28:12.789 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.789 12:04:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.789 12:04:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.789 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.789 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:12.789 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.789 12:04:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.789 12:04:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.789 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.789 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:12.789 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.789 12:04:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.789 12:04:03 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:28:12.789 12:04:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.789 12:04:03 -- host/auth.sh@44 -- # digest=sha512 00:28:12.789 12:04:03 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.789 12:04:03 -- host/auth.sh@44 -- # keyid=4 00:28:12.789 12:04:03 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:12.789 12:04:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.789 12:04:03 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:12.789 12:04:03 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:12.789 12:04:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:28:12.789 12:04:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.789 12:04:03 -- host/auth.sh@68 -- # digest=sha512 00:28:12.789 12:04:03 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:12.789 12:04:03 -- host/auth.sh@68 -- # keyid=4 00:28:12.789 12:04:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:12.789 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.789 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:12.789 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.789 12:04:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.789 12:04:03 -- nvmf/common.sh@717 -- # local ip 00:28:12.789 12:04:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.789 12:04:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.789 12:04:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.789 12:04:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.789 12:04:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.789 12:04:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.789 12:04:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.789 12:04:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.789 12:04:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.789 12:04:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.789 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.789 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 nvme0n1 00:28:13.046 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.046 12:04:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.046 12:04:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.046 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.046 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.046 12:04:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.047 12:04:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.047 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.047 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.047 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.047 12:04:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.047 12:04:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.047 12:04:03 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:28:13.047 12:04:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.047 12:04:03 -- host/auth.sh@44 -- # digest=sha512 00:28:13.047 12:04:03 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.047 12:04:03 -- host/auth.sh@44 -- # keyid=0 00:28:13.047 12:04:03 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:13.047 12:04:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.047 12:04:03 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:13.047 12:04:03 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:13.047 12:04:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:28:13.047 12:04:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.047 12:04:03 -- host/auth.sh@68 -- # digest=sha512 00:28:13.047 12:04:03 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:13.047 12:04:03 -- host/auth.sh@68 -- # keyid=0 00:28:13.047 12:04:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.047 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.047 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.047 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.047 12:04:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.047 12:04:03 -- nvmf/common.sh@717 -- # local ip 00:28:13.047 12:04:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.047 12:04:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.047 12:04:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.047 12:04:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.047 12:04:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.047 12:04:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.047 12:04:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.047 12:04:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.047 12:04:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.047 12:04:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:13.047 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.047 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.305 nvme0n1 00:28:13.305 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.305 12:04:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.305 12:04:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.305 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.305 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.305 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.305 12:04:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.305 12:04:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.305 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.305 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.305 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.305 12:04:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.305 12:04:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:13.305 12:04:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.305 12:04:03 -- host/auth.sh@44 -- # digest=sha512 00:28:13.305 
12:04:03 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.305 12:04:03 -- host/auth.sh@44 -- # keyid=1 00:28:13.305 12:04:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:13.305 12:04:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.305 12:04:03 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:13.305 12:04:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:13.305 12:04:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:28:13.305 12:04:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.305 12:04:03 -- host/auth.sh@68 -- # digest=sha512 00:28:13.305 12:04:03 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:13.305 12:04:03 -- host/auth.sh@68 -- # keyid=1 00:28:13.305 12:04:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.305 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.305 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.305 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.305 12:04:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.305 12:04:03 -- nvmf/common.sh@717 -- # local ip 00:28:13.305 12:04:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.305 12:04:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.305 12:04:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.305 12:04:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.305 12:04:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.305 12:04:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.305 12:04:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.305 12:04:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.305 12:04:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.305 12:04:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:13.305 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.305 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.564 nvme0n1 00:28:13.564 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.564 12:04:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.564 12:04:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.564 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.564 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.564 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.564 12:04:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.564 12:04:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.564 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.564 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.564 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.564 12:04:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.564 12:04:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:13.564 12:04:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.564 12:04:03 -- host/auth.sh@44 -- # digest=sha512 00:28:13.564 12:04:03 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.564 12:04:03 -- host/auth.sh@44 -- # keyid=2 00:28:13.564 
12:04:03 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:13.564 12:04:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.564 12:04:03 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:13.564 12:04:03 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:13.564 12:04:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:28:13.564 12:04:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.564 12:04:03 -- host/auth.sh@68 -- # digest=sha512 00:28:13.564 12:04:03 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:13.564 12:04:03 -- host/auth.sh@68 -- # keyid=2 00:28:13.564 12:04:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.564 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.564 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.564 12:04:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.564 12:04:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.564 12:04:03 -- nvmf/common.sh@717 -- # local ip 00:28:13.564 12:04:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.564 12:04:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.564 12:04:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.564 12:04:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.564 12:04:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.564 12:04:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.564 12:04:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.564 12:04:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.564 12:04:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.564 12:04:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:13.564 12:04:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.564 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.564 nvme0n1 00:28:13.564 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.564 12:04:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.564 12:04:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.564 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.564 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:13.823 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.823 12:04:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.823 12:04:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.823 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.823 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:13.823 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.823 12:04:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.823 12:04:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:13.823 12:04:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.823 12:04:04 -- host/auth.sh@44 -- # digest=sha512 00:28:13.823 12:04:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.823 12:04:04 -- host/auth.sh@44 -- # keyid=3 00:28:13.823 12:04:04 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:13.823 12:04:04 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:28:13.823 12:04:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:13.823 12:04:04 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:13.823 12:04:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:28:13.823 12:04:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.823 12:04:04 -- host/auth.sh@68 -- # digest=sha512 00:28:13.823 12:04:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:13.823 12:04:04 -- host/auth.sh@68 -- # keyid=3 00:28:13.823 12:04:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.823 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.823 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:13.823 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.823 12:04:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.823 12:04:04 -- nvmf/common.sh@717 -- # local ip 00:28:13.823 12:04:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.823 12:04:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.823 12:04:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.823 12:04:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.823 12:04:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.823 12:04:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.823 12:04:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.823 12:04:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.823 12:04:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.823 12:04:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:13.823 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.823 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:13.823 nvme0n1 00:28:13.823 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.823 12:04:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.823 12:04:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.823 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.823 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:13.823 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.081 12:04:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.081 12:04:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.081 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.081 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.081 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.081 12:04:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.081 12:04:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:14.081 12:04:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.081 12:04:04 -- host/auth.sh@44 -- # digest=sha512 00:28:14.081 12:04:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.081 12:04:04 -- host/auth.sh@44 -- # keyid=4 00:28:14.081 12:04:04 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:14.081 12:04:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.081 12:04:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:14.081 
12:04:04 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:14.081 12:04:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:28:14.081 12:04:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.081 12:04:04 -- host/auth.sh@68 -- # digest=sha512 00:28:14.081 12:04:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:14.081 12:04:04 -- host/auth.sh@68 -- # keyid=4 00:28:14.081 12:04:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:14.081 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.081 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.081 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.081 12:04:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.081 12:04:04 -- nvmf/common.sh@717 -- # local ip 00:28:14.081 12:04:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.081 12:04:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.081 12:04:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.081 12:04:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.081 12:04:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.081 12:04:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.081 12:04:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.081 12:04:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.081 12:04:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.081 12:04:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.081 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.081 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.081 nvme0n1 00:28:14.081 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.081 12:04:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.081 12:04:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.081 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.081 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.081 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.339 12:04:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.339 12:04:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.339 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.339 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.339 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.339 12:04:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.339 12:04:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.340 12:04:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:14.340 12:04:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.340 12:04:04 -- host/auth.sh@44 -- # digest=sha512 00:28:14.340 12:04:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.340 12:04:04 -- host/auth.sh@44 -- # keyid=0 00:28:14.340 12:04:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:14.340 12:04:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.340 12:04:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:14.340 12:04:04 -- host/auth.sh@49 -- # echo 
DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:14.340 12:04:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:28:14.340 12:04:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.340 12:04:04 -- host/auth.sh@68 -- # digest=sha512 00:28:14.340 12:04:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:14.340 12:04:04 -- host/auth.sh@68 -- # keyid=0 00:28:14.340 12:04:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:14.340 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.340 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.340 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.340 12:04:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.340 12:04:04 -- nvmf/common.sh@717 -- # local ip 00:28:14.340 12:04:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.340 12:04:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.340 12:04:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.340 12:04:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.340 12:04:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.340 12:04:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.340 12:04:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.340 12:04:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.340 12:04:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.340 12:04:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:14.340 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.340 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 nvme0n1 00:28:14.598 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.598 12:04:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.598 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.598 12:04:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.598 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.598 12:04:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.598 12:04:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.598 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.598 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.598 12:04:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.598 12:04:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:14.598 12:04:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.598 12:04:04 -- host/auth.sh@44 -- # digest=sha512 00:28:14.598 12:04:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.598 12:04:04 -- host/auth.sh@44 -- # keyid=1 00:28:14.598 12:04:04 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:14.598 12:04:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.598 12:04:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:14.598 12:04:04 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:14.598 12:04:04 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:28:14.598 12:04:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.598 12:04:04 -- host/auth.sh@68 -- # digest=sha512 00:28:14.598 12:04:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:14.598 12:04:04 -- host/auth.sh@68 -- # keyid=1 00:28:14.598 12:04:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:14.598 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.598 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 12:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.598 12:04:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.598 12:04:04 -- nvmf/common.sh@717 -- # local ip 00:28:14.598 12:04:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.598 12:04:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.598 12:04:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.599 12:04:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.599 12:04:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.599 12:04:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.599 12:04:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.599 12:04:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.599 12:04:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.599 12:04:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:14.599 12:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.599 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:28:14.857 nvme0n1 00:28:14.857 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.857 12:04:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.857 12:04:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.857 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.857 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.857 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.857 12:04:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.857 12:04:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.857 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.857 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.857 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.857 12:04:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.857 12:04:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:14.857 12:04:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.857 12:04:05 -- host/auth.sh@44 -- # digest=sha512 00:28:14.857 12:04:05 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.857 12:04:05 -- host/auth.sh@44 -- # keyid=2 00:28:14.857 12:04:05 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:14.857 12:04:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.857 12:04:05 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:14.857 12:04:05 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:14.857 12:04:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:28:14.857 12:04:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.857 12:04:05 -- host/auth.sh@68 -- # 
digest=sha512 00:28:14.857 12:04:05 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:14.857 12:04:05 -- host/auth.sh@68 -- # keyid=2 00:28:14.857 12:04:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:14.857 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.857 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.857 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.857 12:04:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.857 12:04:05 -- nvmf/common.sh@717 -- # local ip 00:28:14.857 12:04:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.857 12:04:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.857 12:04:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.857 12:04:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.857 12:04:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.857 12:04:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.857 12:04:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.857 12:04:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.857 12:04:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.857 12:04:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:14.857 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.857 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.116 nvme0n1 00:28:15.116 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.116 12:04:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.116 12:04:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.116 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.116 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.116 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.116 12:04:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.116 12:04:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.116 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.116 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.116 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.116 12:04:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.116 12:04:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:15.116 12:04:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.116 12:04:05 -- host/auth.sh@44 -- # digest=sha512 00:28:15.116 12:04:05 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.116 12:04:05 -- host/auth.sh@44 -- # keyid=3 00:28:15.116 12:04:05 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:15.116 12:04:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:15.116 12:04:05 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:15.116 12:04:05 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:15.116 12:04:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:28:15.116 12:04:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.116 12:04:05 -- host/auth.sh@68 -- # digest=sha512 00:28:15.116 12:04:05 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:15.116 12:04:05 -- host/auth.sh@68 
-- # keyid=3 00:28:15.116 12:04:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:15.116 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.116 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.116 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.116 12:04:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.116 12:04:05 -- nvmf/common.sh@717 -- # local ip 00:28:15.116 12:04:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.116 12:04:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.116 12:04:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.116 12:04:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.116 12:04:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.116 12:04:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.116 12:04:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.116 12:04:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.116 12:04:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.116 12:04:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:15.116 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.116 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.374 nvme0n1 00:28:15.374 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.374 12:04:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.374 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.374 12:04:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.374 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.374 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.374 12:04:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.374 12:04:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.374 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.374 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.374 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.374 12:04:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.374 12:04:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:15.374 12:04:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.374 12:04:05 -- host/auth.sh@44 -- # digest=sha512 00:28:15.374 12:04:05 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.374 12:04:05 -- host/auth.sh@44 -- # keyid=4 00:28:15.374 12:04:05 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:15.374 12:04:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:15.374 12:04:05 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:15.374 12:04:05 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:15.374 12:04:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:28:15.374 12:04:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.374 12:04:05 -- host/auth.sh@68 -- # digest=sha512 00:28:15.374 12:04:05 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:15.374 12:04:05 -- host/auth.sh@68 -- # keyid=4 00:28:15.374 12:04:05 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:15.374 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.374 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.374 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.633 12:04:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.633 12:04:05 -- nvmf/common.sh@717 -- # local ip 00:28:15.633 12:04:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.633 12:04:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.633 12:04:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.633 12:04:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.633 12:04:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.633 12:04:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.633 12:04:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.633 12:04:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.633 12:04:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.633 12:04:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.633 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.633 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:28:15.633 nvme0n1 00:28:15.633 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.633 12:04:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.633 12:04:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.633 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.633 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:15.633 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.892 12:04:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.892 12:04:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.892 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.892 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.892 12:04:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.892 12:04:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.892 12:04:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:15.892 12:04:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.892 12:04:06 -- host/auth.sh@44 -- # digest=sha512 00:28:15.892 12:04:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.892 12:04:06 -- host/auth.sh@44 -- # keyid=0 00:28:15.892 12:04:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:15.892 12:04:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:15.892 12:04:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:15.892 12:04:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:15.892 12:04:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:28:15.892 12:04:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.892 12:04:06 -- host/auth.sh@68 -- # digest=sha512 00:28:15.892 12:04:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:15.892 12:04:06 -- host/auth.sh@68 -- # keyid=0 00:28:15.892 12:04:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.892 
12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.892 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.892 12:04:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.892 12:04:06 -- nvmf/common.sh@717 -- # local ip 00:28:15.892 12:04:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.892 12:04:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.892 12:04:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.892 12:04:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.892 12:04:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.892 12:04:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.892 12:04:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.892 12:04:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.892 12:04:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.892 12:04:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:15.892 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.892 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:16.150 nvme0n1 00:28:16.150 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.150 12:04:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.150 12:04:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:16.150 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.150 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:16.150 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.150 12:04:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.150 12:04:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.150 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.150 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:16.150 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.150 12:04:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:16.150 12:04:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:16.150 12:04:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:16.150 12:04:06 -- host/auth.sh@44 -- # digest=sha512 00:28:16.150 12:04:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.150 12:04:06 -- host/auth.sh@44 -- # keyid=1 00:28:16.150 12:04:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:16.150 12:04:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:16.150 12:04:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:16.150 12:04:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:16.150 12:04:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:28:16.150 12:04:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:16.150 12:04:06 -- host/auth.sh@68 -- # digest=sha512 00:28:16.150 12:04:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:16.150 12:04:06 -- host/auth.sh@68 -- # keyid=1 00:28:16.150 12:04:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:16.150 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.150 12:04:06 -- common/autotest_common.sh@10 -- # 
set +x 00:28:16.150 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.150 12:04:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:16.150 12:04:06 -- nvmf/common.sh@717 -- # local ip 00:28:16.150 12:04:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:16.150 12:04:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:16.150 12:04:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.150 12:04:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.150 12:04:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:16.150 12:04:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.150 12:04:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:16.150 12:04:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:16.150 12:04:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:16.150 12:04:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:16.150 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.150 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:28:16.715 nvme0n1 00:28:16.715 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.715 12:04:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.715 12:04:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:16.715 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.715 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:16.715 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.715 12:04:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.715 12:04:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.715 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.715 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:16.715 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.715 12:04:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:16.715 12:04:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:16.715 12:04:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:16.715 12:04:07 -- host/auth.sh@44 -- # digest=sha512 00:28:16.715 12:04:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.715 12:04:07 -- host/auth.sh@44 -- # keyid=2 00:28:16.715 12:04:07 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:16.715 12:04:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:16.715 12:04:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:16.715 12:04:07 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:16.715 12:04:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:28:16.715 12:04:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:16.715 12:04:07 -- host/auth.sh@68 -- # digest=sha512 00:28:16.715 12:04:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:16.715 12:04:07 -- host/auth.sh@68 -- # keyid=2 00:28:16.715 12:04:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:16.715 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.715 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:16.715 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.715 12:04:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:16.715 12:04:07 -- 
nvmf/common.sh@717 -- # local ip 00:28:16.715 12:04:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:16.715 12:04:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:16.715 12:04:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.715 12:04:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.715 12:04:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:16.715 12:04:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.715 12:04:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:16.715 12:04:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:16.715 12:04:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:16.715 12:04:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:16.715 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.715 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:16.973 nvme0n1 00:28:16.973 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.973 12:04:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:16.973 12:04:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.973 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.973 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:16.973 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.973 12:04:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.973 12:04:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.973 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.973 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.233 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.233 12:04:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:17.233 12:04:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:17.233 12:04:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:17.233 12:04:07 -- host/auth.sh@44 -- # digest=sha512 00:28:17.233 12:04:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.233 12:04:07 -- host/auth.sh@44 -- # keyid=3 00:28:17.233 12:04:07 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:17.233 12:04:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:17.233 12:04:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:17.233 12:04:07 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:17.233 12:04:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:28:17.233 12:04:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:17.233 12:04:07 -- host/auth.sh@68 -- # digest=sha512 00:28:17.233 12:04:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:17.233 12:04:07 -- host/auth.sh@68 -- # keyid=3 00:28:17.233 12:04:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:17.233 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.233 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.233 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.233 12:04:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:17.234 12:04:07 -- nvmf/common.sh@717 -- # local ip 00:28:17.234 12:04:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:17.234 12:04:07 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:17.234 12:04:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.234 12:04:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.234 12:04:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:17.234 12:04:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.234 12:04:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:17.234 12:04:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:17.234 12:04:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:17.234 12:04:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:17.234 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.234 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.492 nvme0n1 00:28:17.492 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.492 12:04:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.492 12:04:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:17.492 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.492 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.492 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.492 12:04:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.492 12:04:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.492 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.492 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.492 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.492 12:04:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:17.492 12:04:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:17.492 12:04:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:17.492 12:04:07 -- host/auth.sh@44 -- # digest=sha512 00:28:17.492 12:04:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.492 12:04:07 -- host/auth.sh@44 -- # keyid=4 00:28:17.492 12:04:07 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:17.492 12:04:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:17.492 12:04:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:17.492 12:04:07 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:17.492 12:04:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:28:17.492 12:04:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:17.492 12:04:07 -- host/auth.sh@68 -- # digest=sha512 00:28:17.492 12:04:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:17.492 12:04:07 -- host/auth.sh@68 -- # keyid=4 00:28:17.492 12:04:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:17.492 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.492 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.492 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.492 12:04:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:17.492 12:04:07 -- nvmf/common.sh@717 -- # local ip 00:28:17.492 12:04:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:17.492 12:04:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:17.492 12:04:07 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.492 12:04:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.492 12:04:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:17.492 12:04:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.492 12:04:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:17.492 12:04:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:17.492 12:04:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:17.492 12:04:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.492 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.492 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:18.057 nvme0n1 00:28:18.057 12:04:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.057 12:04:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.057 12:04:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:18.057 12:04:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.057 12:04:08 -- common/autotest_common.sh@10 -- # set +x 00:28:18.057 12:04:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.057 12:04:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.057 12:04:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.057 12:04:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.057 12:04:08 -- common/autotest_common.sh@10 -- # set +x 00:28:18.057 12:04:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.057 12:04:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.057 12:04:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:18.057 12:04:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:18.057 12:04:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:18.057 12:04:08 -- host/auth.sh@44 -- # digest=sha512 00:28:18.057 12:04:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.057 12:04:08 -- host/auth.sh@44 -- # keyid=0 00:28:18.057 12:04:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:18.057 12:04:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:18.057 12:04:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:18.057 12:04:08 -- host/auth.sh@49 -- # echo DHHC-1:00:MTQwOWVlYTk0MTMwZmFkYzA0NWI5ODFiNjE0MGQwYmMyrZ6G: 00:28:18.057 12:04:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:28:18.057 12:04:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:18.057 12:04:08 -- host/auth.sh@68 -- # digest=sha512 00:28:18.057 12:04:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:18.057 12:04:08 -- host/auth.sh@68 -- # keyid=0 00:28:18.057 12:04:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.057 12:04:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.057 12:04:08 -- common/autotest_common.sh@10 -- # set +x 00:28:18.057 12:04:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.057 12:04:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:18.057 12:04:08 -- nvmf/common.sh@717 -- # local ip 00:28:18.057 12:04:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:18.057 12:04:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:18.057 12:04:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.057 12:04:08 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.057 12:04:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:18.057 12:04:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.057 12:04:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:18.057 12:04:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:18.057 12:04:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:18.057 12:04:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:18.057 12:04:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.057 12:04:08 -- common/autotest_common.sh@10 -- # set +x 00:28:18.623 nvme0n1 00:28:18.623 12:04:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.623 12:04:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.623 12:04:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:18.623 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.623 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:18.623 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.623 12:04:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.623 12:04:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.623 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.623 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:18.623 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.623 12:04:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:18.623 12:04:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:18.623 12:04:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:18.623 12:04:09 -- host/auth.sh@44 -- # digest=sha512 00:28:18.623 12:04:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.623 12:04:09 -- host/auth.sh@44 -- # keyid=1 00:28:18.623 12:04:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:18.623 12:04:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:18.623 12:04:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:18.623 12:04:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:18.623 12:04:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:28:18.623 12:04:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:18.623 12:04:09 -- host/auth.sh@68 -- # digest=sha512 00:28:18.623 12:04:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:18.623 12:04:09 -- host/auth.sh@68 -- # keyid=1 00:28:18.623 12:04:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.623 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.623 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:18.623 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.623 12:04:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:18.623 12:04:09 -- nvmf/common.sh@717 -- # local ip 00:28:18.623 12:04:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:18.623 12:04:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:18.623 12:04:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.623 12:04:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.623 12:04:09 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:28:18.623 12:04:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.623 12:04:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:18.623 12:04:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:18.623 12:04:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:18.623 12:04:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:18.623 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.623 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:19.188 nvme0n1 00:28:19.188 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.188 12:04:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.189 12:04:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:19.189 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.189 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:19.189 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.189 12:04:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.189 12:04:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.189 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.189 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:19.189 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.189 12:04:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:19.189 12:04:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:19.189 12:04:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:19.189 12:04:09 -- host/auth.sh@44 -- # digest=sha512 00:28:19.189 12:04:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.189 12:04:09 -- host/auth.sh@44 -- # keyid=2 00:28:19.189 12:04:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:19.189 12:04:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:19.189 12:04:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:19.189 12:04:09 -- host/auth.sh@49 -- # echo DHHC-1:01:NzkwOWRiZDc5Mzc5YzliZDUyMmU4YmU1ZGFlYzFlMmIQf9uH: 00:28:19.189 12:04:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:28:19.189 12:04:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:19.189 12:04:09 -- host/auth.sh@68 -- # digest=sha512 00:28:19.189 12:04:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:19.189 12:04:09 -- host/auth.sh@68 -- # keyid=2 00:28:19.189 12:04:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.189 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.189 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:19.189 12:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.189 12:04:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:19.189 12:04:09 -- nvmf/common.sh@717 -- # local ip 00:28:19.189 12:04:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:19.189 12:04:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:19.189 12:04:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.189 12:04:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.189 12:04:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:19.189 12:04:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.189 12:04:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:19.189 
12:04:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:19.189 12:04:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:19.189 12:04:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.189 12:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.189 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:28:19.753 nvme0n1 00:28:19.753 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.754 12:04:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.754 12:04:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:19.754 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.754 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:19.754 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.011 12:04:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.011 12:04:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.011 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.011 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.011 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.011 12:04:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.011 12:04:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:20.011 12:04:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.011 12:04:10 -- host/auth.sh@44 -- # digest=sha512 00:28:20.011 12:04:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.011 12:04:10 -- host/auth.sh@44 -- # keyid=3 00:28:20.011 12:04:10 -- host/auth.sh@45 -- # key=DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:20.011 12:04:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.011 12:04:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:20.011 12:04:10 -- host/auth.sh@49 -- # echo DHHC-1:02:ODkyODY0YTA3OGUwNzVlMDJjOTIwNmQ5N2I3YmM2NzY2MTFkODQ3MGMzZWU0MWNmQVS8oQ==: 00:28:20.011 12:04:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:28:20.011 12:04:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.011 12:04:10 -- host/auth.sh@68 -- # digest=sha512 00:28:20.011 12:04:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:20.011 12:04:10 -- host/auth.sh@68 -- # keyid=3 00:28:20.011 12:04:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.011 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.011 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.011 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.011 12:04:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.011 12:04:10 -- nvmf/common.sh@717 -- # local ip 00:28:20.011 12:04:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.011 12:04:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.011 12:04:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.011 12:04:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.011 12:04:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.011 12:04:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.011 12:04:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.011 12:04:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.011 12:04:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:28:20.011 12:04:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:20.011 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.011 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 nvme0n1 00:28:20.576 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.576 12:04:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.576 12:04:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:20.576 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.576 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.576 12:04:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.576 12:04:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.576 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.576 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.576 12:04:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.576 12:04:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:20.576 12:04:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.576 12:04:10 -- host/auth.sh@44 -- # digest=sha512 00:28:20.576 12:04:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.576 12:04:10 -- host/auth.sh@44 -- # keyid=4 00:28:20.576 12:04:10 -- host/auth.sh@45 -- # key=DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:20.576 12:04:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.576 12:04:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:20.576 12:04:10 -- host/auth.sh@49 -- # echo DHHC-1:03:MDEyYzcwZjMwZGEwMTcxN2NjODE4MDI0YTgzZWJlOTU2N2I1OGEzZGYwNzYyMTk0YWZjMDE4YTFlMDIwNTk1MvoohfI=: 00:28:20.576 12:04:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:28:20.576 12:04:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.576 12:04:10 -- host/auth.sh@68 -- # digest=sha512 00:28:20.576 12:04:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:20.576 12:04:10 -- host/auth.sh@68 -- # keyid=4 00:28:20.576 12:04:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.576 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.576 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 12:04:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.576 12:04:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.576 12:04:10 -- nvmf/common.sh@717 -- # local ip 00:28:20.576 12:04:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.576 12:04:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.576 12:04:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.576 12:04:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.576 12:04:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.576 12:04:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.576 12:04:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.576 12:04:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.576 12:04:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.576 12:04:10 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.576 12:04:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.576 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:28:21.141 nvme0n1 00:28:21.141 12:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.141 12:04:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.141 12:04:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.141 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.141 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.141 12:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.141 12:04:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.141 12:04:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.141 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.141 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.141 12:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.141 12:04:11 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:21.141 12:04:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.141 12:04:11 -- host/auth.sh@44 -- # digest=sha256 00:28:21.141 12:04:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.141 12:04:11 -- host/auth.sh@44 -- # keyid=1 00:28:21.141 12:04:11 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:21.141 12:04:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:21.141 12:04:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:21.141 12:04:11 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWFhNzEzOWU2NzViY2UzN2E0ZjJlZjMzNzAzNDI0MDJiOWFkYjRjNGJjOWEwMjQ24CrVsw==: 00:28:21.141 12:04:11 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:21.141 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.141 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.141 12:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.141 12:04:11 -- host/auth.sh@119 -- # get_main_ns_ip 00:28:21.141 12:04:11 -- nvmf/common.sh@717 -- # local ip 00:28:21.141 12:04:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.141 12:04:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.141 12:04:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.141 12:04:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.141 12:04:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.141 12:04:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.141 12:04:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.141 12:04:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.141 12:04:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.141 12:04:11 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:21.141 12:04:11 -- common/autotest_common.sh@638 -- # local es=0 00:28:21.141 12:04:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:21.141 12:04:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:21.141 12:04:11 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:21.141 12:04:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:21.141 12:04:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:21.141 12:04:11 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:21.141 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.141 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.141 request: 00:28:21.141 { 00:28:21.141 "name": "nvme0", 00:28:21.141 "trtype": "tcp", 00:28:21.141 "traddr": "10.0.0.1", 00:28:21.141 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:21.141 "adrfam": "ipv4", 00:28:21.141 "trsvcid": "4420", 00:28:21.141 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:21.141 "method": "bdev_nvme_attach_controller", 00:28:21.141 "req_id": 1 00:28:21.141 } 00:28:21.141 Got JSON-RPC error response 00:28:21.141 response: 00:28:21.141 { 00:28:21.141 "code": -32602, 00:28:21.141 "message": "Invalid parameters" 00:28:21.141 } 00:28:21.141 12:04:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:21.141 12:04:11 -- common/autotest_common.sh@641 -- # es=1 00:28:21.141 12:04:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:21.141 12:04:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:21.141 12:04:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:21.141 12:04:11 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.141 12:04:11 -- host/auth.sh@121 -- # jq length 00:28:21.141 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.142 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.142 12:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.399 12:04:11 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:28:21.399 12:04:11 -- host/auth.sh@124 -- # get_main_ns_ip 00:28:21.399 12:04:11 -- nvmf/common.sh@717 -- # local ip 00:28:21.399 12:04:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.399 12:04:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.399 12:04:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.399 12:04:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.399 12:04:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.399 12:04:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.399 12:04:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.399 12:04:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.399 12:04:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.399 12:04:11 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.399 12:04:11 -- common/autotest_common.sh@638 -- # local es=0 00:28:21.399 12:04:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.399 12:04:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:21.399 12:04:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:21.399 12:04:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:21.399 12:04:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:21.399 12:04:11 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.399 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.399 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.399 request: 00:28:21.399 { 00:28:21.400 "name": "nvme0", 00:28:21.400 "trtype": "tcp", 00:28:21.400 "traddr": "10.0.0.1", 00:28:21.400 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:21.400 "adrfam": "ipv4", 00:28:21.400 "trsvcid": "4420", 00:28:21.400 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:21.400 "dhchap_key": "key2", 00:28:21.400 "method": "bdev_nvme_attach_controller", 00:28:21.400 "req_id": 1 00:28:21.400 } 00:28:21.400 Got JSON-RPC error response 00:28:21.400 response: 00:28:21.400 { 00:28:21.400 "code": -32602, 00:28:21.400 "message": "Invalid parameters" 00:28:21.400 } 00:28:21.400 12:04:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:21.400 12:04:11 -- common/autotest_common.sh@641 -- # es=1 00:28:21.400 12:04:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:21.400 12:04:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:21.400 12:04:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:21.400 12:04:11 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.400 12:04:11 -- host/auth.sh@127 -- # jq length 00:28:21.400 12:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.400 12:04:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.400 12:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.400 12:04:11 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:28:21.400 12:04:11 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:21.400 12:04:11 -- host/auth.sh@130 -- # cleanup 00:28:21.400 12:04:11 -- host/auth.sh@24 -- # nvmftestfini 00:28:21.400 12:04:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:21.400 12:04:11 -- nvmf/common.sh@117 -- # sync 00:28:21.400 12:04:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.400 12:04:11 -- nvmf/common.sh@120 -- # set +e 00:28:21.400 12:04:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.400 12:04:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.400 rmmod nvme_tcp 00:28:21.400 rmmod nvme_fabrics 00:28:21.400 12:04:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.400 12:04:11 -- nvmf/common.sh@124 -- # set -e 00:28:21.400 12:04:11 -- nvmf/common.sh@125 -- # return 0 00:28:21.400 12:04:11 -- nvmf/common.sh@478 -- # '[' -n 2623910 ']' 00:28:21.400 12:04:11 -- nvmf/common.sh@479 -- # killprocess 2623910 00:28:21.400 12:04:11 -- common/autotest_common.sh@936 -- # '[' -z 2623910 ']' 00:28:21.400 12:04:11 -- common/autotest_common.sh@940 -- # kill -0 2623910 00:28:21.400 12:04:11 -- common/autotest_common.sh@941 -- # uname 00:28:21.400 12:04:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:21.400 12:04:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2623910 00:28:21.658 12:04:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:21.658 12:04:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:21.658 12:04:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2623910' 00:28:21.658 killing process with pid 2623910 00:28:21.658 12:04:11 -- common/autotest_common.sh@955 -- # kill 2623910 00:28:21.658 12:04:11 -- common/autotest_common.sh@960 -- # wait 2623910 00:28:22.653 12:04:12 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:22.653 12:04:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:22.653 12:04:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:22.653 12:04:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.653 12:04:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.653 12:04:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.653 12:04:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.653 12:04:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.552 12:04:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.552 12:04:15 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:24.552 12:04:15 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.552 12:04:15 -- host/auth.sh@27 -- # clean_kernel_target 00:28:24.552 12:04:15 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:24.552 12:04:15 -- nvmf/common.sh@675 -- # echo 0 00:28:24.552 12:04:15 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.552 12:04:15 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.552 12:04:15 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:24.552 12:04:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.552 12:04:15 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:24.552 12:04:15 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:24.810 12:04:15 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:28.116 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:28.116 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:29.487 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:29.487 12:04:19 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Tbn /tmp/spdk.key-null.uP8 /tmp/spdk.key-sha256.uVV /tmp/spdk.key-sha384.FCi /tmp/spdk.key-sha512.Ogi /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:29.487 12:04:19 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:32.765 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:00:04.4 (8086 2021): Already using the 
vfio-pci driver 00:28:32.765 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:32.765 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:32.765 00:28:32.765 real 0m53.390s 00:28:32.765 user 0m45.066s 00:28:32.765 sys 0m15.112s 00:28:32.765 12:04:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:32.765 12:04:23 -- common/autotest_common.sh@10 -- # set +x 00:28:32.765 ************************************ 00:28:32.765 END TEST nvmf_auth 00:28:32.765 ************************************ 00:28:32.765 12:04:23 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:28:32.765 12:04:23 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:32.765 12:04:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:32.765 12:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.765 12:04:23 -- common/autotest_common.sh@10 -- # set +x 00:28:33.023 ************************************ 00:28:33.023 START TEST nvmf_digest 00:28:33.023 ************************************ 00:28:33.023 12:04:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:33.280 * Looking for test storage... 
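The digest suite starting here can be rerun on its own exactly as the runner invokes it; a minimal sketch, assuming the same workspace layout as this job and root privileges:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # rebind the NICs/NVMe devices to vfio-pci, as the preceding setup.sh pass did
  scripts/setup.sh
  # run only the NVMe-oF host digest tests over TCP (same argument as above)
  test/nvmf/host/digest.sh --transport=tcp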
00:28:33.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.281 12:04:23 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.281 12:04:23 -- nvmf/common.sh@7 -- # uname -s 00:28:33.281 12:04:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.281 12:04:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.281 12:04:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.281 12:04:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.281 12:04:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.281 12:04:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.281 12:04:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.281 12:04:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.281 12:04:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.281 12:04:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.281 12:04:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:33.281 12:04:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:33.281 12:04:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.281 12:04:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.281 12:04:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.281 12:04:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.281 12:04:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.281 12:04:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.281 12:04:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.281 12:04:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.281 12:04:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.281 12:04:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.281 12:04:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.281 12:04:23 -- paths/export.sh@5 -- # export PATH 00:28:33.281 12:04:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.281 12:04:23 -- nvmf/common.sh@47 -- # : 0 00:28:33.281 12:04:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.281 12:04:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.281 12:04:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.281 12:04:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.281 12:04:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.281 12:04:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.281 12:04:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.281 12:04:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.281 12:04:23 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:33.281 12:04:23 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:33.281 12:04:23 -- host/digest.sh@16 -- # runtime=2 00:28:33.281 12:04:23 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:33.281 12:04:23 -- host/digest.sh@138 -- # nvmftestinit 00:28:33.281 12:04:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:33.281 12:04:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.281 12:04:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:33.281 12:04:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:33.281 12:04:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:33.281 12:04:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.281 12:04:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.281 12:04:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.281 12:04:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:33.281 12:04:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:33.281 12:04:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.281 12:04:23 -- common/autotest_common.sh@10 -- # set +x 00:28:39.844 12:04:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:39.844 12:04:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.844 12:04:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.844 12:04:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.844 12:04:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.844 12:04:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.844 12:04:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.844 12:04:29 -- 
nvmf/common.sh@295 -- # net_devs=() 00:28:39.844 12:04:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.844 12:04:29 -- nvmf/common.sh@296 -- # e810=() 00:28:39.844 12:04:29 -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.844 12:04:29 -- nvmf/common.sh@297 -- # x722=() 00:28:39.844 12:04:29 -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.844 12:04:29 -- nvmf/common.sh@298 -- # mlx=() 00:28:39.844 12:04:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.844 12:04:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.844 12:04:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.844 12:04:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.844 12:04:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.844 12:04:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.844 12:04:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:39.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:39.844 12:04:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.844 12:04:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:39.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:39.844 12:04:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.844 12:04:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.844 12:04:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.844 12:04:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:39.844 12:04:29 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.844 12:04:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:39.844 Found net devices under 0000:af:00.0: cvl_0_0 00:28:39.844 12:04:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.844 12:04:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.844 12:04:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.844 12:04:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:39.844 12:04:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.844 12:04:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:39.844 Found net devices under 0000:af:00.1: cvl_0_1 00:28:39.844 12:04:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.844 12:04:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:39.844 12:04:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:39.844 12:04:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:39.844 12:04:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.844 12:04:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.844 12:04:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.844 12:04:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.844 12:04:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.844 12:04:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.844 12:04:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.844 12:04:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.844 12:04:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.844 12:04:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.844 12:04:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.844 12:04:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.844 12:04:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.844 12:04:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.844 12:04:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.844 12:04:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.844 12:04:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.844 12:04:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.844 12:04:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.844 12:04:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:28:39.844 00:28:39.844 --- 10.0.0.2 ping statistics --- 00:28:39.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.844 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:39.844 12:04:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:28:39.844 00:28:39.844 --- 10.0.0.1 ping statistics --- 00:28:39.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.844 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:39.844 12:04:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.844 12:04:29 -- nvmf/common.sh@411 -- # return 0 00:28:39.844 12:04:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:39.844 12:04:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.844 12:04:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:39.844 12:04:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.844 12:04:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:39.844 12:04:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:39.844 12:04:29 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:39.844 12:04:29 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:39.844 12:04:29 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:39.844 12:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:39.844 12:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:39.844 12:04:29 -- common/autotest_common.sh@10 -- # set +x 00:28:39.844 ************************************ 00:28:39.844 START TEST nvmf_digest_clean 00:28:39.845 ************************************ 00:28:39.845 12:04:30 -- common/autotest_common.sh@1111 -- # run_digest 00:28:39.845 12:04:30 -- host/digest.sh@120 -- # local dsa_initiator 00:28:39.845 12:04:30 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:39.845 12:04:30 -- host/digest.sh@121 -- # dsa_initiator=false 00:28:39.845 12:04:30 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:39.845 12:04:30 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:39.845 12:04:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:39.845 12:04:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:39.845 12:04:30 -- common/autotest_common.sh@10 -- # set +x 00:28:39.845 12:04:30 -- nvmf/common.sh@470 -- # nvmfpid=2637773 00:28:39.845 12:04:30 -- nvmf/common.sh@471 -- # waitforlisten 2637773 00:28:39.845 12:04:30 -- common/autotest_common.sh@817 -- # '[' -z 2637773 ']' 00:28:39.845 12:04:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.845 12:04:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:39.845 12:04:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.845 12:04:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:39.845 12:04:30 -- common/autotest_common.sh@10 -- # set +x 00:28:39.845 12:04:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:39.845 [2024-04-18 12:04:30.116841] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
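The nvmf_tcp_init trace above wires the two E810 ports into a loopback topology: the target port (cvl_0_0) is moved into a private namespace on 10.0.0.2 while the initiator port (cvl_0_1) stays in the root namespace on 10.0.0.1; a minimal sketch of the same wiring, assuming the two ports are cabled back-to-back as on this rig:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1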
00:28:39.845 [2024-04-18 12:04:30.116938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.845 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.845 [2024-04-18 12:04:30.249606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.104 [2024-04-18 12:04:30.460423] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.104 [2024-04-18 12:04:30.460473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.104 [2024-04-18 12:04:30.460486] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.104 [2024-04-18 12:04:30.460498] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.104 [2024-04-18 12:04:30.460508] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.104 [2024-04-18 12:04:30.460545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.363 12:04:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:40.363 12:04:30 -- common/autotest_common.sh@850 -- # return 0 00:28:40.363 12:04:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:40.363 12:04:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:40.363 12:04:30 -- common/autotest_common.sh@10 -- # set +x 00:28:40.363 12:04:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.363 12:04:30 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:40.363 12:04:30 -- host/digest.sh@126 -- # common_target_config 00:28:40.363 12:04:30 -- host/digest.sh@43 -- # rpc_cmd 00:28:40.363 12:04:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:40.363 12:04:30 -- common/autotest_common.sh@10 -- # set +x 00:28:40.931 null0 00:28:40.931 [2024-04-18 12:04:31.296159] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.931 [2024-04-18 12:04:31.320378] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.931 12:04:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:40.931 12:04:31 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:40.931 12:04:31 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:40.931 12:04:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:40.931 12:04:31 -- host/digest.sh@80 -- # rw=randread 00:28:40.931 12:04:31 -- host/digest.sh@80 -- # bs=4096 00:28:40.932 12:04:31 -- host/digest.sh@80 -- # qd=128 00:28:40.932 12:04:31 -- host/digest.sh@80 -- # scan_dsa=false 00:28:40.932 12:04:31 -- host/digest.sh@83 -- # bperfpid=2637926 00:28:40.932 12:04:31 -- host/digest.sh@84 -- # waitforlisten 2637926 /var/tmp/bperf.sock 00:28:40.932 12:04:31 -- common/autotest_common.sh@817 -- # '[' -z 2637926 ']' 00:28:40.932 12:04:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:40.932 12:04:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:40.932 12:04:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:40.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:40.932 12:04:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:40.932 12:04:31 -- common/autotest_common.sh@10 -- # set +x 00:28:40.932 12:04:31 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:40.932 [2024-04-18 12:04:31.403971] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:40.932 [2024-04-18 12:04:31.404063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637926 ] 00:28:40.932 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.190 [2024-04-18 12:04:31.528609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.449 [2024-04-18 12:04:31.741913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.708 12:04:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:41.708 12:04:32 -- common/autotest_common.sh@850 -- # return 0 00:28:41.708 12:04:32 -- host/digest.sh@86 -- # false 00:28:41.708 12:04:32 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:41.708 12:04:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.277 12:04:32 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.277 12:04:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.535 nvme0n1 00:28:42.535 12:04:32 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.535 12:04:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.535 Running I/O for 2 seconds... 
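Each run_bperf pass above follows the same pattern: start bdevperf idle on its own RPC socket, attach an NVMe/TCP bdev with the digest option under test, then drive the workload over that socket; a minimal sketch of this first case (randread, 4 KiB, QD 128, data digest enabled), assuming the target brought up earlier is still listening on 10.0.0.2:4420:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the harness waits for /var/tmp/bperf.sock to appear before issuing RPCs)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the NVMe/TCP data digest (crc32c) on this connection
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests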
00:28:45.066 00:28:45.066 Latency(us) 00:28:45.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.066 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:45.066 nvme0n1 : 2.00 24258.31 94.76 0.00 0.00 5270.93 2752.51 12373.20 00:28:45.066 =================================================================================================================== 00:28:45.066 Total : 24258.31 94.76 0.00 0.00 5270.93 2752.51 12373.20 00:28:45.066 0 00:28:45.066 12:04:35 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.066 12:04:35 -- host/digest.sh@93 -- # get_accel_stats 00:28:45.066 12:04:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.066 12:04:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.066 | select(.opcode=="crc32c") 00:28:45.066 | "\(.module_name) \(.executed)"' 00:28:45.066 12:04:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.066 12:04:35 -- host/digest.sh@94 -- # false 00:28:45.066 12:04:35 -- host/digest.sh@94 -- # exp_module=software 00:28:45.066 12:04:35 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.066 12:04:35 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:45.066 12:04:35 -- host/digest.sh@98 -- # killprocess 2637926 00:28:45.066 12:04:35 -- common/autotest_common.sh@936 -- # '[' -z 2637926 ']' 00:28:45.066 12:04:35 -- common/autotest_common.sh@940 -- # kill -0 2637926 00:28:45.066 12:04:35 -- common/autotest_common.sh@941 -- # uname 00:28:45.066 12:04:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:45.066 12:04:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2637926 00:28:45.066 12:04:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:45.066 12:04:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:45.066 12:04:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2637926' 00:28:45.066 killing process with pid 2637926 00:28:45.066 12:04:35 -- common/autotest_common.sh@955 -- # kill 2637926 00:28:45.066 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.066 00:28:45.066 Latency(us) 00:28:45.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.066 =================================================================================================================== 00:28:45.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.066 12:04:35 -- common/autotest_common.sh@960 -- # wait 2637926 00:28:46.029 12:04:36 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:46.029 12:04:36 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.029 12:04:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.029 12:04:36 -- host/digest.sh@80 -- # rw=randread 00:28:46.029 12:04:36 -- host/digest.sh@80 -- # bs=131072 00:28:46.029 12:04:36 -- host/digest.sh@80 -- # qd=16 00:28:46.029 12:04:36 -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.029 12:04:36 -- host/digest.sh@83 -- # bperfpid=2638761 00:28:46.029 12:04:36 -- host/digest.sh@84 -- # waitforlisten 2638761 /var/tmp/bperf.sock 00:28:46.029 12:04:36 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:46.029 12:04:36 -- common/autotest_common.sh@817 -- # '[' -z 2638761 ']' 00:28:46.029 12:04:36 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.029 12:04:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:46.029 12:04:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.029 12:04:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:46.029 12:04:36 -- common/autotest_common.sh@10 -- # set +x 00:28:46.029 [2024-04-18 12:04:36.346314] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:46.029 [2024-04-18 12:04:36.346448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638761 ] 00:28:46.029 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:46.029 Zero copy mechanism will not be used. 00:28:46.029 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.029 [2024-04-18 12:04:36.472541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.288 [2024-04-18 12:04:36.688184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.855 12:04:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:46.855 12:04:37 -- common/autotest_common.sh@850 -- # return 0 00:28:46.855 12:04:37 -- host/digest.sh@86 -- # false 00:28:46.855 12:04:37 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:46.855 12:04:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.113 12:04:37 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.113 12:04:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.371 nvme0n1 00:28:47.371 12:04:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:47.371 12:04:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.628 Zero copy mechanism will not be used. 00:28:47.628 Running I/O for 2 seconds... 
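After each workload the harness reads the crc32c accel statistics back from bdevperf to prove the digest work actually ran, and in which module (software here, since DSA is disabled); a minimal sketch of that check, assuming a bdevperf instance is still up on /var/tmp/bperf.sock:

  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # pass criterion used above: some crc32c operations executed, and they ran in the software module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]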
00:28:49.527 00:28:49.527 Latency(us) 00:28:49.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.527 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:49.527 nvme0n1 : 2.00 3767.33 470.92 0.00 0.00 4244.34 1474.56 11534.34 00:28:49.527 =================================================================================================================== 00:28:49.527 Total : 3767.33 470.92 0.00 0.00 4244.34 1474.56 11534.34 00:28:49.527 0 00:28:49.527 12:04:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.527 12:04:39 -- host/digest.sh@93 -- # get_accel_stats 00:28:49.527 12:04:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.527 12:04:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.527 | select(.opcode=="crc32c") 00:28:49.527 | "\(.module_name) \(.executed)"' 00:28:49.527 12:04:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.790 12:04:40 -- host/digest.sh@94 -- # false 00:28:49.790 12:04:40 -- host/digest.sh@94 -- # exp_module=software 00:28:49.790 12:04:40 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:49.790 12:04:40 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:49.790 12:04:40 -- host/digest.sh@98 -- # killprocess 2638761 00:28:49.790 12:04:40 -- common/autotest_common.sh@936 -- # '[' -z 2638761 ']' 00:28:49.790 12:04:40 -- common/autotest_common.sh@940 -- # kill -0 2638761 00:28:49.790 12:04:40 -- common/autotest_common.sh@941 -- # uname 00:28:49.790 12:04:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:49.790 12:04:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2638761 00:28:49.790 12:04:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:49.790 12:04:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:49.790 12:04:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2638761' 00:28:49.790 killing process with pid 2638761 00:28:49.790 12:04:40 -- common/autotest_common.sh@955 -- # kill 2638761 00:28:49.790 Received shutdown signal, test time was about 2.000000 seconds 00:28:49.790 00:28:49.790 Latency(us) 00:28:49.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.790 =================================================================================================================== 00:28:49.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.790 12:04:40 -- common/autotest_common.sh@960 -- # wait 2638761 00:28:50.745 12:04:41 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:50.745 12:04:41 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:50.745 12:04:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:50.745 12:04:41 -- host/digest.sh@80 -- # rw=randwrite 00:28:50.745 12:04:41 -- host/digest.sh@80 -- # bs=4096 00:28:50.745 12:04:41 -- host/digest.sh@80 -- # qd=128 00:28:50.745 12:04:41 -- host/digest.sh@80 -- # scan_dsa=false 00:28:50.745 12:04:41 -- host/digest.sh@83 -- # bperfpid=2639644 00:28:50.745 12:04:41 -- host/digest.sh@84 -- # waitforlisten 2639644 /var/tmp/bperf.sock 00:28:50.745 12:04:41 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:50.745 12:04:41 -- common/autotest_common.sh@817 -- # '[' -z 2639644 ']' 00:28:50.745 12:04:41 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.745 12:04:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:50.745 12:04:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.745 12:04:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:50.745 12:04:41 -- common/autotest_common.sh@10 -- # set +x 00:28:51.004 [2024-04-18 12:04:41.321645] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:51.004 [2024-04-18 12:04:41.321745] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639644 ] 00:28:51.004 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.004 [2024-04-18 12:04:41.445987] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.262 [2024-04-18 12:04:41.656051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.829 12:04:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:51.829 12:04:42 -- common/autotest_common.sh@850 -- # return 0 00:28:51.829 12:04:42 -- host/digest.sh@86 -- # false 00:28:51.829 12:04:42 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:51.829 12:04:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:52.087 12:04:42 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.087 12:04:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.654 nvme0n1 00:28:52.654 12:04:42 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:52.654 12:04:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.654 Running I/O for 2 seconds... 
00:28:54.555 00:28:54.555 Latency(us) 00:28:54.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.555 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.555 nvme0n1 : 2.00 25030.58 97.78 0.00 0.00 5106.89 2306.87 11219.76 00:28:54.555 =================================================================================================================== 00:28:54.555 Total : 25030.58 97.78 0.00 0.00 5106.89 2306.87 11219.76 00:28:54.555 0 00:28:54.555 12:04:45 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:54.555 12:04:45 -- host/digest.sh@93 -- # get_accel_stats 00:28:54.555 12:04:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:54.555 12:04:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:54.555 | select(.opcode=="crc32c") 00:28:54.555 | "\(.module_name) \(.executed)"' 00:28:54.555 12:04:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:54.814 12:04:45 -- host/digest.sh@94 -- # false 00:28:54.814 12:04:45 -- host/digest.sh@94 -- # exp_module=software 00:28:54.814 12:04:45 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:54.814 12:04:45 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:54.814 12:04:45 -- host/digest.sh@98 -- # killprocess 2639644 00:28:54.814 12:04:45 -- common/autotest_common.sh@936 -- # '[' -z 2639644 ']' 00:28:54.814 12:04:45 -- common/autotest_common.sh@940 -- # kill -0 2639644 00:28:54.814 12:04:45 -- common/autotest_common.sh@941 -- # uname 00:28:54.814 12:04:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:54.814 12:04:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2639644 00:28:54.814 12:04:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:54.814 12:04:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:54.814 12:04:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2639644' 00:28:54.814 killing process with pid 2639644 00:28:54.814 12:04:45 -- common/autotest_common.sh@955 -- # kill 2639644 00:28:54.814 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.814 00:28:54.814 Latency(us) 00:28:54.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.814 =================================================================================================================== 00:28:54.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.814 12:04:45 -- common/autotest_common.sh@960 -- # wait 2639644 00:28:56.191 12:04:46 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:56.191 12:04:46 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:56.191 12:04:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:56.191 12:04:46 -- host/digest.sh@80 -- # rw=randwrite 00:28:56.191 12:04:46 -- host/digest.sh@80 -- # bs=131072 00:28:56.191 12:04:46 -- host/digest.sh@80 -- # qd=16 00:28:56.191 12:04:46 -- host/digest.sh@80 -- # scan_dsa=false 00:28:56.191 12:04:46 -- host/digest.sh@83 -- # bperfpid=2640529 00:28:56.191 12:04:46 -- host/digest.sh@84 -- # waitforlisten 2640529 /var/tmp/bperf.sock 00:28:56.191 12:04:46 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:56.191 12:04:46 -- common/autotest_common.sh@817 -- # '[' -z 2640529 ']' 00:28:56.191 
12:04:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.191 12:04:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:56.191 12:04:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.191 12:04:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:56.191 12:04:46 -- common/autotest_common.sh@10 -- # set +x 00:28:56.191 [2024-04-18 12:04:46.393555] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:56.191 [2024-04-18 12:04:46.393662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640529 ] 00:28:56.191 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.191 Zero copy mechanism will not be used. 00:28:56.191 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.191 [2024-04-18 12:04:46.519195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.191 [2024-04-18 12:04:46.734251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.757 12:04:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:56.757 12:04:47 -- common/autotest_common.sh@850 -- # return 0 00:28:56.757 12:04:47 -- host/digest.sh@86 -- # false 00:28:56.757 12:04:47 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:56.757 12:04:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:57.324 12:04:47 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.324 12:04:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.581 nvme0n1 00:28:57.581 12:04:48 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:57.581 12:04:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.840 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:57.840 Zero copy mechanism will not be used. 00:28:57.840 Running I/O for 2 seconds... 
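This second clean pass only changes the I/O shape (128 KiB blocks, queue depth 16, still randwrite); the bdevperf launch recorded at host/digest.sh@82 above is equivalent to the following sketch, using the same binary and socket as the log:
# Sketch of the large-block bdevperf launch for run_bperf randwrite 131072 16.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!
# 131072 is above the 65536-byte zero-copy threshold, hence the
# "Zero copy mechanism will not be used" notice printed by bdevperf above.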
00:28:59.742
00:28:59.742 Latency(us)
00:28:59.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.742 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:59.742 nvme0n1 : 2.00 4192.46 524.06 0.00 0.00 3810.27 2686.98 8545.89
00:28:59.742 ===================================================================================================================
00:28:59.742 Total : 4192.46 524.06 0.00 0.00 3810.27 2686.98 8545.89
00:28:59.742 0
00:28:59.742 12:04:50 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:59.742 12:04:50 -- host/digest.sh@93 -- # get_accel_stats
00:28:59.742 12:04:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:59.742 12:04:50 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:59.742 | select(.opcode=="crc32c")
00:28:59.742 | "\(.module_name) \(.executed)"'
00:28:59.742 12:04:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:59.999 12:04:50 -- host/digest.sh@94 -- # false
00:28:59.999 12:04:50 -- host/digest.sh@94 -- # exp_module=software
00:28:59.999 12:04:50 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:59.999 12:04:50 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:59.999 12:04:50 -- host/digest.sh@98 -- # killprocess 2640529
00:29:00.000 12:04:50 -- common/autotest_common.sh@936 -- # '[' -z 2640529 ']'
00:29:00.000 12:04:50 -- common/autotest_common.sh@940 -- # kill -0 2640529
00:29:00.000 12:04:50 -- common/autotest_common.sh@941 -- # uname
00:29:00.000 12:04:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:00.000 12:04:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2640529
00:29:00.000 12:04:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:00.000 12:04:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:00.000 12:04:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2640529'
00:29:00.000 killing process with pid 2640529
00:29:00.000 12:04:50 -- common/autotest_common.sh@955 -- # kill 2640529
00:29:00.000 Received shutdown signal, test time was about 2.000000 seconds
00:29:00.000
00:29:00.000 Latency(us)
00:29:00.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:00.000 ===================================================================================================================
00:29:00.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:00.000 12:04:50 -- common/autotest_common.sh@960 -- # wait 2640529
00:29:00.958 12:04:51 -- host/digest.sh@132 -- # killprocess 2637773
00:29:00.958 12:04:51 -- common/autotest_common.sh@936 -- # '[' -z 2637773 ']'
00:29:00.958 12:04:51 -- common/autotest_common.sh@940 -- # kill -0 2637773
00:29:00.958 12:04:51 -- common/autotest_common.sh@941 -- # uname
00:29:00.958 12:04:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:00.958 12:04:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2637773
00:29:01.216 12:04:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:29:01.216 12:04:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:29:01.216 12:04:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2637773'
00:29:01.216 killing process with pid 2637773
00:29:01.216 12:04:51 -- common/autotest_common.sh@955 -- # kill 2637773
00:29:01.216 12:04:51 -- common/autotest_common.sh@960 -- # wait 2637773
00:29:02.589
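After each run, digest.sh decides pass/fail from bdevperf's accel statistics (host/digest.sh@93-96 above): it pulls accel_get_stats over the bperf socket, filters the crc32c opcode with jq, and checks that the expected module actually executed work. A minimal sketch of that check, assuming the same socket and jq filter recorded in the log:
# Sketch of the crc32c accounting check; the clean pass expects the software
# module to have executed at least one crc32c operation.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
read -r acc_module acc_executed < <(
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))          # some crc32c work actually went through accel
[[ $acc_module == software ]]   # and it was handled by the expected module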
00:29:02.589 real 0m22.738s 00:29:02.589 user 0m41.785s 00:29:02.589 sys 0m5.236s 00:29:02.589 12:04:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:02.589 12:04:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.589 ************************************ 00:29:02.589 END TEST nvmf_digest_clean 00:29:02.589 ************************************ 00:29:02.589 12:04:52 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:02.589 12:04:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:02.589 12:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:02.589 12:04:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.589 ************************************ 00:29:02.589 START TEST nvmf_digest_error 00:29:02.589 ************************************ 00:29:02.589 12:04:52 -- common/autotest_common.sh@1111 -- # run_digest_error 00:29:02.589 12:04:52 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:02.589 12:04:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:02.589 12:04:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:02.589 12:04:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.589 12:04:52 -- nvmf/common.sh@470 -- # nvmfpid=2641638 00:29:02.589 12:04:52 -- nvmf/common.sh@471 -- # waitforlisten 2641638 00:29:02.589 12:04:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:02.589 12:04:52 -- common/autotest_common.sh@817 -- # '[' -z 2641638 ']' 00:29:02.589 12:04:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.589 12:04:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:02.589 12:04:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.589 12:04:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:02.589 12:04:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.589 [2024-04-18 12:04:53.056142] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:02.589 [2024-04-18 12:04:53.056230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.589 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.845 [2024-04-18 12:04:53.185445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.845 [2024-04-18 12:04:53.387456] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.845 [2024-04-18 12:04:53.387509] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.845 [2024-04-18 12:04:53.387522] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.845 [2024-04-18 12:04:53.387545] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.846 [2024-04-18 12:04:53.387554] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:02.846 [2024-04-18 12:04:53.387592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.410 12:04:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:03.410 12:04:53 -- common/autotest_common.sh@850 -- # return 0 00:29:03.410 12:04:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:03.410 12:04:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:03.410 12:04:53 -- common/autotest_common.sh@10 -- # set +x 00:29:03.410 12:04:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.410 12:04:53 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:03.410 12:04:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.410 12:04:53 -- common/autotest_common.sh@10 -- # set +x 00:29:03.410 [2024-04-18 12:04:53.869385] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:03.410 12:04:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.410 12:04:53 -- host/digest.sh@105 -- # common_target_config 00:29:03.410 12:04:53 -- host/digest.sh@43 -- # rpc_cmd 00:29:03.410 12:04:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.410 12:04:53 -- common/autotest_common.sh@10 -- # set +x 00:29:03.975 null0 00:29:03.975 [2024-04-18 12:04:54.259976] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.975 [2024-04-18 12:04:54.284218] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.975 12:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.975 12:04:54 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:03.975 12:04:54 -- host/digest.sh@54 -- # local rw bs qd 00:29:03.975 12:04:54 -- host/digest.sh@56 -- # rw=randread 00:29:03.975 12:04:54 -- host/digest.sh@56 -- # bs=4096 00:29:03.975 12:04:54 -- host/digest.sh@56 -- # qd=128 00:29:03.975 12:04:54 -- host/digest.sh@58 -- # bperfpid=2641917 00:29:03.975 12:04:54 -- host/digest.sh@60 -- # waitforlisten 2641917 /var/tmp/bperf.sock 00:29:03.975 12:04:54 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:03.975 12:04:54 -- common/autotest_common.sh@817 -- # '[' -z 2641917 ']' 00:29:03.975 12:04:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:03.975 12:04:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:03.975 12:04:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:03.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:03.975 12:04:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:03.975 12:04:54 -- common/autotest_common.sh@10 -- # set +x 00:29:03.975 [2024-04-18 12:04:54.367154] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:29:03.975 [2024-04-18 12:04:54.367246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641917 ] 00:29:03.975 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.975 [2024-04-18 12:04:54.491931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.231 [2024-04-18 12:04:54.703315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.796 12:04:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:04.796 12:04:55 -- common/autotest_common.sh@850 -- # return 0 00:29:04.796 12:04:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:04.796 12:04:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:04.796 12:04:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:04.796 12:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.796 12:04:55 -- common/autotest_common.sh@10 -- # set +x 00:29:04.796 12:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.796 12:04:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.796 12:04:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.053 nvme0n1 00:29:05.053 12:04:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:05.053 12:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.053 12:04:55 -- common/autotest_common.sh@10 -- # set +x 00:29:05.053 12:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.053 12:04:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:05.053 12:04:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.310 Running I/O for 2 seconds... 
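The error-path test differs from the clean runs only in how the target's accel layer is set up: while the nvmf target is still in --wait-for-rpc, crc32c is routed to the "error" accel module (digest.sh@104), injection stays disabled while bdevperf attaches nvme0 with --ddgst, and corruption is then enabled (digest.sh@67) so the initiator sees the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow. A sketch of the target-side RPCs, assuming rpc.py pointed at the /var/tmp/spdk.sock address that rpc_cmd resolves to in this job:
# Sketch of the error-injection setup; flags are the ones recorded in the log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
tgt_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
tgt_rpc accel_assign_opc -o crc32c -m error            # route crc32c through the error module
tgt_rpc accel_error_inject_error -o crc32c -t disable  # no corruption while the controller attaches
# ... bdevperf attaches nvme0 with --ddgst over 10.0.0.2:4420 (see the sketch further above) ...
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt crc32c results (-i 256 as logged)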
00:29:05.310 [2024-04-18 12:04:55.703459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.703503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.703521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.714814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.714851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.714867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.724807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.724840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.724855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.735802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.735833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.735848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.746684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.746714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.746729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.756859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.756889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.756908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.766106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.766136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.766150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.778308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.778337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.778352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.789016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.789045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.789060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.799635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.799664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.799678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.810243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.310 [2024-04-18 12:04:55.810272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.310 [2024-04-18 12:04:55.810286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.310 [2024-04-18 12:04:55.820419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.311 [2024-04-18 12:04:55.820448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.311 [2024-04-18 12:04:55.820469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.311 [2024-04-18 12:04:55.830488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.311 [2024-04-18 12:04:55.830516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.311 [2024-04-18 12:04:55.830530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.311 [2024-04-18 12:04:55.841668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.311 [2024-04-18 12:04:55.841697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.311 
[2024-04-18 12:04:55.841711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.311 [2024-04-18 12:04:55.851699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.311 [2024-04-18 12:04:55.851728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.311 [2024-04-18 12:04:55.851742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.863398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.863428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.863456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.874939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.874968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.874982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.884924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.884953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.884968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.895382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.895412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.895428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.907616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.907644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.907659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.916468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.916498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:7831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.916513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.927610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.927639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.927654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.939371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.939418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.948627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.948656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.948670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.960097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.960127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.960141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.971412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.971442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.971462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.981292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:55.981322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.981336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:55.992340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 
[2024-04-18 12:04:55.992370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:55.992384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.003147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.003176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.003190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.015159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.015188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.015202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.023932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.023961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.023975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.035324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.035353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.035368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.045865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.045894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.045908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.056785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.056813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.056827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.067170] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.067198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.067213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.077813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.077842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.077855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.088636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.088665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.088679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.099043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.099072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.099086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.569 [2024-04-18 12:04:56.110138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.569 [2024-04-18 12:04:56.110167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.569 [2024-04-18 12:04:56.110181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.120131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.120160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.120179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.132567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.132595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.132610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.142680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.142709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.142723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.155086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.155114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.155128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.165739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.165769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.165783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.175156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.175185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.175199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.186681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.186710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.186724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.197576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.197607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.197621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.206811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.206839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.827 [2024-04-18 12:04:56.206853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.827 [2024-04-18 12:04:56.219313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.827 [2024-04-18 12:04:56.219342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.219356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.228230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.228258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.228273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.239973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.240001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.240015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.251539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.251567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.251582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.261667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.261697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.261711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.271350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.271378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.271393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.282751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.282779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24761 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.282794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.293520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.293549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.293563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.304606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.304635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.304653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.314289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.314319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.314334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.325783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.325821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.325836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.336046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.336076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.336091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.345787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.345816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.345831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.357214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.357244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.357259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.828 [2024-04-18 12:04:56.366257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:05.828 [2024-04-18 12:04:56.366285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.828 [2024-04-18 12:04:56.366299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.378166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.378195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.378210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.389204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.389233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.389247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.399664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.399696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.399710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.410959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.410988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.411002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.421312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.421341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.421355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.430753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.430781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.430795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.442010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.442039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.442054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.453618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.453647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.453662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.462182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.462210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.462225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.474942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.474971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.474985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.484974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.485002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.485019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.496103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.496131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.496145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 
12:04:56.507466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.507496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.507510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.517291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.517319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.517333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.528361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.528389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.528404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.538218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.538246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.538261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.550168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.550197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.550212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.560117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.560147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.560161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.570414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.570443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.570462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.583201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.583233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.583248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.086 [2024-04-18 12:04:56.592310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.086 [2024-04-18 12:04:56.592337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.086 [2024-04-18 12:04:56.592352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.087 [2024-04-18 12:04:56.603467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.087 [2024-04-18 12:04:56.603495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.087 [2024-04-18 12:04:56.603510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.087 [2024-04-18 12:04:56.614242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.087 [2024-04-18 12:04:56.614270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.087 [2024-04-18 12:04:56.614285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.087 [2024-04-18 12:04:56.624261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.087 [2024-04-18 12:04:56.624289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.087 [2024-04-18 12:04:56.624303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.636013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.636042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.636057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.646904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.646932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 
12:04:56.646947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.657134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.657163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.657177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.667760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.667788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.667806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.678009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.678037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.678052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.689545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.689573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.689587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.699914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.699943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.699957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.710883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.710910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.710925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.721666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.721694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:14524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.721708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.731172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.731200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.731214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.741890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.344 [2024-04-18 12:04:56.741917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.344 [2024-04-18 12:04:56.741931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.344 [2024-04-18 12:04:56.752923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.752952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.752967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.763502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.763534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.763549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.774622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.774651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.774666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.784365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.784394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.784408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.794731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 
12:04:56.794759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.794773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.806667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.806695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.806710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.818160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.818188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.818204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.828649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.828678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.828692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.838756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.838785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.838799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.849018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.849047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.849064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.860192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.860221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.860235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.870399] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.870428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.870442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.880876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.880905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.880920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.345 [2024-04-18 12:04:56.891626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.345 [2024-04-18 12:04:56.891656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.345 [2024-04-18 12:04:56.891672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.903100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.903129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.903143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.913533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.913562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.913576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.924641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.924670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.924684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.934676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.934704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.934719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.945659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.945692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.945706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.955305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.955334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.955349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.966236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.966266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.966281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.978229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.978258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.978273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.987851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.987878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.987893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:56.997953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:56.997981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:56.997995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.009316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.009344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.009359] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.019092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.019120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.019135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.029853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.029880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.029895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.041035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.041063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.041078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.051762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.051789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.051804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.062472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.062500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.062514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.072506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.072534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.072548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.603 [2024-04-18 12:04:57.083082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.603 [2024-04-18 12:04:57.083110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6741 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.603 [2024-04-18 12:04:57.083124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.604 [2024-04-18 12:04:57.092852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.604 [2024-04-18 12:04:57.092879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.604 [2024-04-18 12:04:57.092894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.604 [2024-04-18 12:04:57.105099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.604 [2024-04-18 12:04:57.105128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.604 [2024-04-18 12:04:57.105143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.604 [2024-04-18 12:04:57.114504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.604 [2024-04-18 12:04:57.114531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.604 [2024-04-18 12:04:57.114546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.604 [2024-04-18 12:04:57.125721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.604 [2024-04-18 12:04:57.125754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.604 [2024-04-18 12:04:57.125769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.604 [2024-04-18 12:04:57.136109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.604 [2024-04-18 12:04:57.136137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.604 [2024-04-18 12:04:57.136152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.604 [2024-04-18 12:04:57.146188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.604 [2024-04-18 12:04:57.146217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.604 [2024-04-18 12:04:57.146231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.157889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.157917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.157932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.168858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.168887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.168902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.179737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.179766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.179780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.189851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.189880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.189895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.200575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.200603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.210490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.210518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.210532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.221772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.221800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.221815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.232475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.232504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.232526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.242963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.242992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.243006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.252471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.862 [2024-04-18 12:04:57.252499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.862 [2024-04-18 12:04:57.252514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.862 [2024-04-18 12:04:57.264941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.264970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.264985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.276128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.276157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.276172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.285524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.285553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.285568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.297081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.297111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.297126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 
12:04:57.308860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.308893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.308907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.321013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.321043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.321059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.332147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.332178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.332193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.343014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.343042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.343057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.354679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.354708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.354723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.364899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.364942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.375294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.375323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.375338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.386147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.386176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.386190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.396605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.396633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.396648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.863 [2024-04-18 12:04:57.407376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:06.863 [2024-04-18 12:04:57.407405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.863 [2024-04-18 12:04:57.407421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.419474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.419502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.419517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.428301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.428330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.428344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.439667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.439705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.439720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.451447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.451483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 
12:04:57.451498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.460924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.460954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.460968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.473258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.473287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.473303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.482796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.482825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.482840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.493821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.493856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.493871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.504684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.504713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.504728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.516471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.516500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.516514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.526980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.527008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.527022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.538024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.538052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.538066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.547599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.547628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.547642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.558639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.558667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.558681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.569676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.569704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.569718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.579318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.579346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.579361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.589731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.589760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.589774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.601055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.601084] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.601099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.612709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.612737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.612751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.622113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.622141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.622156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.632957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.632985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.632999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.643406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.643434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.643448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.653892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.122 [2024-04-18 12:04:57.653921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.122 [2024-04-18 12:04:57.653935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.122 [2024-04-18 12:04:57.664847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:07.123 [2024-04-18 12:04:57.664876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.123 [2024-04-18 12:04:57.664891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.379 [2024-04-18 12:04:57.674748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x614000007240)
00:29:07.379 [2024-04-18 12:04:57.674779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:07.379 [2024-04-18 12:04:57.674793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:07.379 [2024-04-18 12:04:57.686167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:29:07.379 [2024-04-18 12:04:57.686195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:07.379 [2024-04-18 12:04:57.686209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:07.379
00:29:07.379 Latency(us)
00:29:07.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.379 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:07.379 nvme0n1 : 2.00 23688.94 92.53 0.00 0.00 5395.73 2647.65 18350.08
00:29:07.379 ===================================================================================================================
00:29:07.379 Total : 23688.94 92.53 0.00 0.00 5395.73 2647.65 18350.08
00:29:07.379 0
00:29:07.379 12:04:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:07.379 12:04:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:07.379 12:04:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:07.379 12:04:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:07.379 | .driver_specific
00:29:07.379 | .nvme_error
00:29:07.379 | .status_code
00:29:07.379 | .command_transient_transport_error'
00:29:07.379 12:04:57 -- host/digest.sh@71 -- # (( 186 > 0 ))
00:29:07.379 12:04:57 -- host/digest.sh@73 -- # killprocess 2641917
00:29:07.379 12:04:57 -- common/autotest_common.sh@936 -- # '[' -z 2641917 ']'
00:29:07.379 12:04:57 -- common/autotest_common.sh@940 -- # kill -0 2641917
00:29:07.379 12:04:57 -- common/autotest_common.sh@941 -- # uname
00:29:07.379 12:04:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:07.379 12:04:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2641917
00:29:07.636 12:04:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:07.636 12:04:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:07.636 12:04:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2641917'
killing process with pid 2641917
12:04:57 -- common/autotest_common.sh@955 -- # kill 2641917
Received shutdown signal, test time was about 2.000000 seconds
00:29:07.636
00:29:07.636 Latency(us)
00:29:07.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.636 ===================================================================================================================
00:29:07.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:07.636 12:04:57 -- common/autotest_common.sh@960 -- # wait 2641917
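The RPC and jq steps traced above are how this run is turned into a pass/fail decision. Below is a minimal standalone sketch of that check, not part of the captured output, assuming the same bperf RPC socket and bdev name used in this run: because bdev_nvme_set_options is invoked with --nvme-error-stat, bdev_get_iostat exposes per-status-code NVMe error counters, and the run passes when the injected crc32c corruption shows up as at least one COMMAND TRANSIENT TRANSPORT ERROR completion.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Read per-bdev NVMe error statistics from the bdevperf app over its RPC socket and
  # extract the transient-transport-error counter (186 in the run above).
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only asserts the counter is non-zero: the corrupted data digests must have been
  # reported to the host as retriable transient transport errors rather than silently succeeding.
  (( errcount > 0 ))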
00:29:08.568 12:04:58 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:08.568 12:04:58 -- host/digest.sh@54 -- # local rw bs qd
00:29:08.568 12:04:58 -- host/digest.sh@56 -- # rw=randread
00:29:08.568 12:04:58 -- host/digest.sh@56 -- # bs=131072
00:29:08.568 12:04:58 -- host/digest.sh@56 -- # qd=16
00:29:08.568 12:04:58 -- host/digest.sh@58 -- # bperfpid=2642656
00:29:08.568 12:04:58 -- host/digest.sh@60 -- # waitforlisten 2642656 /var/tmp/bperf.sock
00:29:08.568 12:04:58 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:08.568 12:04:58 -- common/autotest_common.sh@817 -- # '[' -z 2642656 ']'
00:29:08.568 12:04:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:08.568 12:04:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:08.568 12:04:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:08.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:08.568 12:04:58 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:08.568 12:04:58 -- common/autotest_common.sh@10 -- # set +x
00:29:08.568 [2024-04-18 12:04:59.031436] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization...
00:29:08.568 [2024-04-18 12:04:59.031558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2642656 ]
00:29:08.568 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:08.568 Zero copy mechanism will not be used.
00:29:08.568 EAL: No free 2048 kB hugepages reported on node 1
00:29:08.824 [2024-04-18 12:04:59.157113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.824 [2024-04-18 12:04:59.368995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:09.389 12:04:59 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:09.389 12:04:59 -- common/autotest_common.sh@850 -- # return 0
00:29:09.389 12:04:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:09.389 12:04:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:09.644 12:04:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:09.644 12:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:09.644 12:04:59 -- common/autotest_common.sh@10 -- # set +x
00:29:09.644 12:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:09.644 12:04:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.645 12:04:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.900 nvme0n1
00:29:09.900 12:05:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:09.900 12:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:09.900 12:05:00 -- common/autotest_common.sh@10 -- # set +x
00:29:09.900 12:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:09.900 12:05:00 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:09.900 12:05:00 -- host/digest.sh@19 -- 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.900 Zero copy mechanism will not be used. 00:29:09.900 Running I/O for 2 seconds... 00:29:09.900 [2024-04-18 12:05:00.358156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.900 [2024-04-18 12:05:00.358199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.900 [2024-04-18 12:05:00.358217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.900 [2024-04-18 12:05:00.371461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.900 [2024-04-18 12:05:00.371496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.900 [2024-04-18 12:05:00.371512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.900 [2024-04-18 12:05:00.383076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.900 [2024-04-18 12:05:00.383105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.900 [2024-04-18 12:05:00.383120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.900 [2024-04-18 12:05:00.392449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.900 [2024-04-18 12:05:00.392484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.900 [2024-04-18 12:05:00.392498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.900 [2024-04-18 12:05:00.401660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.901 [2024-04-18 12:05:00.401689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.901 [2024-04-18 12:05:00.401705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.901 [2024-04-18 12:05:00.411103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.901 [2024-04-18 12:05:00.411131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.901 [2024-04-18 12:05:00.411146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.901 [2024-04-18 12:05:00.421685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:29:09.901 [2024-04-18 12:05:00.421714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.901 [2024-04-18 12:05:00.421729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.901 [2024-04-18 12:05:00.432123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.901 [2024-04-18 12:05:00.432152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.901 [2024-04-18 12:05:00.432167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.901 [2024-04-18 12:05:00.442556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:09.901 [2024-04-18 12:05:00.442585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.901 [2024-04-18 12:05:00.442601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.455130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.455159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.455174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.466657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.466687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.466703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.478859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.478889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.478908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.490201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.490231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.490246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.500484] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.500513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.500528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.509803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.509833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.509848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.520966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.520995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.521019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.530485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.530515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.530529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.539511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.539541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.539555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.549284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.549314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.549329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.560005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.560034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.560049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.571498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.571527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.571541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.583434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.583469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.583484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.594434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.594469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.594484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.605968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.605997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.606012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.617421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.617449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.617471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.627900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.627930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.627944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.637772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.637801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.637815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.648194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.648223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.648237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.657900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.657928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.657945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.666899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.666927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.666941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.158 [2024-04-18 12:05:00.675075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.158 [2024-04-18 12:05:00.675102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.158 [2024-04-18 12:05:00.675115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.159 [2024-04-18 12:05:00.682739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.159 [2024-04-18 12:05:00.682766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.159 [2024-04-18 12:05:00.682779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.159 [2024-04-18 12:05:00.690227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.159 [2024-04-18 12:05:00.690254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.159 [2024-04-18 12:05:00.690267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.159 [2024-04-18 12:05:00.697768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.159 [2024-04-18 12:05:00.697796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.159 [2024-04-18 12:05:00.697810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.159 [2024-04-18 12:05:00.705384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.159 [2024-04-18 12:05:00.705411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.159 [2024-04-18 12:05:00.705425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.713028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.713056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.713070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.720576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.720602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.720615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.728155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.728182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.728195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.735950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.735978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.735991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.743626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.743652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.743666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.751851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 
12:05:00.751879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.751893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.759721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.759749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.759764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.767797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.767824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.767838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.775994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.776021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.776035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.783646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.783673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.783687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.798824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.798850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.798867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.811129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.811156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.811169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.821503] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.821531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.821545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.830006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.830034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.830047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.837515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.837542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.837555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.844947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.844973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.844987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.852373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.852399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.852412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.859861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.859888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.859902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.867284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.867310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.867324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.874810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.874837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.874851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.883242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.883268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.883282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.891145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.891171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.417 [2024-04-18 12:05:00.891185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.417 [2024-04-18 12:05:00.898669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.417 [2024-04-18 12:05:00.898695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.898709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.906094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.906130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.906144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.913553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.913580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.913594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.921005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.921032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.921046] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.928417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.928444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.928464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.935908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.935934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.935951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.943363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.943389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.943403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.951178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.951205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.951218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.418 [2024-04-18 12:05:00.963167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.418 [2024-04-18 12:05:00.963194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.418 [2024-04-18 12:05:00.963208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:00.976952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:00.976979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:00.976992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:00.988456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:00.988484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:00.988498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:00.999466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:00.999493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:00.999508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.014066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.014095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.014108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.030509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.030537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.030551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.041015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.041042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.041056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.053878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.053906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.053919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.066834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.066861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.066875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.076918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 
12:05:01.076945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.076958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.086124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.086150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.086164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.094767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.094793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.094806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.105988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.106014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.106028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.119248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.119275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.119289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.129588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.129614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.129631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.138247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.138274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.138288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.151767] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.151793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.151807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.162421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.162447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.162466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.171239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.171265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.171278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.179326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.676 [2024-04-18 12:05:01.179352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.676 [2024-04-18 12:05:01.179365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.676 [2024-04-18 12:05:01.194208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.677 [2024-04-18 12:05:01.194234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.677 [2024-04-18 12:05:01.194247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.677 [2024-04-18 12:05:01.206525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.677 [2024-04-18 12:05:01.206552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.677 [2024-04-18 12:05:01.206565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.677 [2024-04-18 12:05:01.218815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.677 [2024-04-18 12:05:01.218843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.677 [2024-04-18 12:05:01.218857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.235466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.235494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.235507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.249710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.249739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.249752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.265264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.265291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.265305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.278180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.278207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.278221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.288150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.288177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.288190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.297774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.297802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.297816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.307835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.307863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.307877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.324468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.324496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.324510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.338068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.338096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.338113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.350063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.350091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.350105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.362062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.362091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.362105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.379278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.379305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.379320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.392941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.392969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.392983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.402922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.402949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.402963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.412873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.412899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.412913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.421177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.421204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.421218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.434866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.434892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.434905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.445408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.445434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.445448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.454950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.454985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.455014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.463743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 12:05:01.463770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.463783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.935 [2024-04-18 12:05:01.473628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:10.935 [2024-04-18 
12:05:01.473656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.935 [2024-04-18 12:05:01.473669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.484004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.484032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.484047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.492847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.492875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.492889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.505578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.505606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.505619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.515895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.515922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.515936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.525051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.525079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.525114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.532970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.532998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.533012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.540597] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.540624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.540637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.548818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.548846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.548860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.558207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.558236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.558250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.567218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.567245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.567260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.576224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.576252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.576266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.585440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.585479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.585493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.594413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.594442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.594463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.603733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.603761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.603775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.613481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.613509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.613524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.622058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.622085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.622099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.629856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.629883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.629897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.637427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.637461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.637491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.644947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.644972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.644986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.653678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.653706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.653720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.662734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.662761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.662775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.671423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.671457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.671475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.680606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.680634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.680648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.689445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.195 [2024-04-18 12:05:01.689480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.195 [2024-04-18 12:05:01.689495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.195 [2024-04-18 12:05:01.698634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.196 [2024-04-18 12:05:01.698662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.196 [2024-04-18 12:05:01.698676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.196 [2024-04-18 12:05:01.707353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.196 [2024-04-18 12:05:01.707380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.196 [2024-04-18 12:05:01.707394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.196 [2024-04-18 12:05:01.715127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.196 [2024-04-18 12:05:01.715154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.196 [2024-04-18 12:05:01.715167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.196 [2024-04-18 12:05:01.722659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.196 [2024-04-18 12:05:01.722687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.196 [2024-04-18 12:05:01.722700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.196 [2024-04-18 12:05:01.730096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.196 [2024-04-18 12:05:01.730123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.196 [2024-04-18 12:05:01.730137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.196 [2024-04-18 12:05:01.737611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.196 [2024-04-18 12:05:01.737639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.196 [2024-04-18 12:05:01.737653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.745657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.745685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.745699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.754210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.754238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.754252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.763515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.763544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.763558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.772603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 
12:05:01.772630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.772644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.781355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.781383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.781405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.790582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.790610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.790624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.799802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.799831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.799846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.807957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.807984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.807998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.817170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.817199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.817217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.826399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.826428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.826442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.837101] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.837128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.837142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.847604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.847633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.847648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.858038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.858066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.858080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.869469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.869497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.869512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.882062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.882090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.882105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.894143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.894175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.894190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.905372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.905401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.905415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.917191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.917220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.917234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.929328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.929357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.929371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.940286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.940314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.940328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.950126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.950154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.950169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.959785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.455 [2024-04-18 12:05:01.959813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.455 [2024-04-18 12:05:01.959827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.455 [2024-04-18 12:05:01.969132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.456 [2024-04-18 12:05:01.969159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.456 [2024-04-18 12:05:01.969173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.456 [2024-04-18 12:05:01.977951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.456 [2024-04-18 12:05:01.977984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.456 [2024-04-18 12:05:01.977998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.456 [2024-04-18 12:05:01.986245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.456 [2024-04-18 12:05:01.986274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.456 [2024-04-18 12:05:01.986289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.456 [2024-04-18 12:05:01.994349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.456 [2024-04-18 12:05:01.994377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.456 [2024-04-18 12:05:01.994397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.456 [2024-04-18 12:05:02.002009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.456 [2024-04-18 12:05:02.002036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.456 [2024-04-18 12:05:02.002050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.714 [2024-04-18 12:05:02.009787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.009814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.009828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.017402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.017429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.017443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.025002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.025029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.025042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.032581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.032608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.032622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.040100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.040127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.040141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.047810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.047837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.047851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.055403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.055430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.055444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.063022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.063049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.063063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.070598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.070625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.070638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.078216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.078242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.078256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.085827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.085854] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.085868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.093404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.093431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.093445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.100951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.100978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.100992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.108602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.108628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.108642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.116188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.116215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.116229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.123686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.123712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.123729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.131214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.131240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.131253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.138732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.138758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.138772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.146236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.146262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.146276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.153739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.153766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.153779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.161317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.161353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.161366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.168783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.168810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.168823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.176206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.176232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.176246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.183741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.183767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.183780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.715 
[2024-04-18 12:05:02.191187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.191216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.191229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.198679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.198705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.715 [2024-04-18 12:05:02.198718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.715 [2024-04-18 12:05:02.206178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.715 [2024-04-18 12:05:02.206204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.206217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.213607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.213633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.213646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.221042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.221069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.221082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.228517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.228544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.228557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.235989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.236015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.236029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.243491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.243518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.243531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.250964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.250990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.251007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.716 [2024-04-18 12:05:02.258455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.716 [2024-04-18 12:05:02.258482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.716 [2024-04-18 12:05:02.258496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.266064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.266092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.266106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.273580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.273607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.273620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.281012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.281037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.281051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.288626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.288653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 
[2024-04-18 12:05:02.288667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.296090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.296117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.296131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.303607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.303633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.303646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.311087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.311113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.311126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.318811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.318841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.318856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.326370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.326396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.326410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.333984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.334011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.974 [2024-04-18 12:05:02.334025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.974 [2024-04-18 12:05:02.341386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:29:11.974 [2024-04-18 12:05:02.341412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.974 [2024-04-18 12:05:02.341426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.974
00:29:11.974 Latency(us)
00:29:11.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.974 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:11.974 nvme0n1 : 2.00 3267.15 408.39 0.00 0.00 4892.69 3591.37 16986.93
00:29:11.974 ===================================================================================================================
00:29:11.974 Total : 3267.15 408.39 0.00 0.00 4892.69 3591.37 16986.93
00:29:11.974 0
00:29:11.974 12:05:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:11.974 12:05:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:11.974 12:05:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:11.974 | .driver_specific
00:29:11.974 | .nvme_error
00:29:11.974 | .status_code
00:29:11.974 | .command_transient_transport_error'
00:29:11.974 12:05:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:12.232 12:05:02 -- host/digest.sh@71 -- # (( 211 > 0 ))
00:29:12.232 12:05:02 -- host/digest.sh@73 -- # killprocess 2642656
00:29:12.232 12:05:02 -- common/autotest_common.sh@936 -- # '[' -z 2642656 ']'
00:29:12.232 12:05:02 -- common/autotest_common.sh@940 -- # kill -0 2642656
00:29:12.232 12:05:02 -- common/autotest_common.sh@941 -- # uname
00:29:12.232 12:05:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:12.232 12:05:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2642656
00:29:12.232 12:05:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:12.232 12:05:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:12.232 12:05:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2642656'
00:29:12.232 killing process with pid 2642656
00:29:12.232 12:05:02 -- common/autotest_common.sh@955 -- # kill 2642656
00:29:12.232 Received shutdown signal, test time was about 2.000000 seconds
00:29:12.232
00:29:12.232 Latency(us)
00:29:12.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.232 ===================================================================================================================
00:29:12.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:12.232 12:05:02 -- common/autotest_common.sh@960 -- # wait 2642656
00:29:13.197 12:05:03 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:13.197 12:05:03 -- host/digest.sh@54 -- # local rw bs qd
00:29:13.197 12:05:03 -- host/digest.sh@56 -- # rw=randwrite
00:29:13.197 12:05:03 -- host/digest.sh@56 -- # bs=4096
00:29:13.197 12:05:03 -- host/digest.sh@56 -- # qd=128
00:29:13.197 12:05:03 -- host/digest.sh@58 -- # bperfpid=2643374
00:29:13.197 12:05:03 -- host/digest.sh@60 -- # waitforlisten 2643374 /var/tmp/bperf.sock
00:29:13.197 12:05:03 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:13.197 12:05:03 -- common/autotest_common.sh@817 -- # '[' -z 2643374 ']'
00:29:13.197 12:05:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:13.197 12:05:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:13.197 12:05:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:13.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:13.197 12:05:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:13.197 12:05:03 -- common/autotest_common.sh@10 -- # set +x
00:29:13.197 [2024-04-18 12:05:03.674801] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization...
00:29:13.197 [2024-04-18 12:05:03.674892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2643374 ]
00:29:13.459 EAL: No free 2048 kB hugepages reported on node 1
00:29:13.459 [2024-04-18 12:05:03.798805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:13.716 [2024-04-18 12:05:04.011662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:13.975 12:05:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:13.975 12:05:04 -- common/autotest_common.sh@850 -- # return 0
00:29:13.975 12:05:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:13.975 12:05:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:14.232 12:05:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:14.232 12:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:14.232 12:05:04 -- common/autotest_common.sh@10 -- # set +x
00:29:14.232 12:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:14.232 12:05:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:14.232 12:05:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:14.489 nvme0n1
00:29:14.489 12:05:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:14.489 12:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:14.489 12:05:04 -- common/autotest_common.sh@10 -- # set +x
00:29:14.489 12:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:14.489 12:05:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:14.489 12:05:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:14.489 Running I/O for 2 seconds...
00:29:14.489 [2024-04-18 12:05:04.983390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.489 [2024-04-18 12:05:04.984488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.489 [2024-04-18 12:05:04.984528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.490 [2024-04-18 12:05:04.994144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.490 [2024-04-18 12:05:04.995252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-04-18 12:05:04.995286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.490 [2024-04-18 12:05:05.004659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.490 [2024-04-18 12:05:05.005753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-04-18 12:05:05.005783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.490 [2024-04-18 12:05:05.015142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.490 [2024-04-18 12:05:05.016271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-04-18 12:05:05.016298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.490 [2024-04-18 12:05:05.025592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.490 [2024-04-18 12:05:05.026593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-04-18 12:05:05.026621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.490 [2024-04-18 12:05:05.036144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.490 [2024-04-18 12:05:05.037290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-04-18 12:05:05.037317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.747 [2024-04-18 12:05:05.046967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.747 [2024-04-18 12:05:05.048080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.747 [2024-04-18 12:05:05.048107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.747 [2024-04-18 12:05:05.057637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.747 [2024-04-18 12:05:05.058757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.747 [2024-04-18 12:05:05.058784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.747 [2024-04-18 12:05:05.068094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.747 [2024-04-18 12:05:05.069205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.747 [2024-04-18 12:05:05.069232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.747 [2024-04-18 12:05:05.078509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.747 [2024-04-18 12:05:05.079632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.079662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.088964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.748 [2024-04-18 12:05:05.090004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.090030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.099306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.748 [2024-04-18 12:05:05.100430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.100462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.109721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.748 [2024-04-18 12:05:05.110826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.110852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.120080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.748 [2024-04-18 12:05:05.121185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.121211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.130504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.748 [2024-04-18 12:05:05.131561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.131586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.140900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.748 [2024-04-18 12:05:05.142024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.142049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.151274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.748 [2024-04-18 12:05:05.152354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.152380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.161693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.748 [2024-04-18 12:05:05.162812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.162837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.172071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.748 [2024-04-18 12:05:05.173081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.173107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.182507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.748 [2024-04-18 12:05:05.183611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.183636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.192902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.748 [2024-04-18 12:05:05.194036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:14.748 [2024-04-18 12:05:05.194063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.203314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.748 [2024-04-18 12:05:05.204417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.204443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.213676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.748 [2024-04-18 12:05:05.214790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.214816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.224090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.748 [2024-04-18 12:05:05.225146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.225180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.234438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.748 [2024-04-18 12:05:05.235564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.235590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.245036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.748 [2024-04-18 12:05:05.246120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.246145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.255422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:14.748 [2024-04-18 12:05:05.256548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.256574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.265783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:14.748 [2024-04-18 12:05:05.266912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:3851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.266937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.276176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:14.748 [2024-04-18 12:05:05.277246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.277272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:14.748 [2024-04-18 12:05:05.286537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:14.748 [2024-04-18 12:05:05.287636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.748 [2024-04-18 12:05:05.287662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.297279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.007 [2024-04-18 12:05:05.298454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.298480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.307851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.007 [2024-04-18 12:05:05.308957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.308982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.318205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.007 [2024-04-18 12:05:05.319352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.319378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.328592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.007 [2024-04-18 12:05:05.329687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.329713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.338961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.007 [2024-04-18 12:05:05.340048] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.340073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.349414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.007 [2024-04-18 12:05:05.350537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.350566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.359842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.007 [2024-04-18 12:05:05.360990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.361016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.370283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.007 [2024-04-18 12:05:05.371401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.371426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.380669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.007 [2024-04-18 12:05:05.381751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.381777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.391077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.007 [2024-04-18 12:05:05.392212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.392237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.401445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.007 [2024-04-18 12:05:05.402523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.402548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.411850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f46d0 00:29:15.007 [2024-04-18 12:05:05.412949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.412974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.422197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.007 [2024-04-18 12:05:05.423363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.423389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.432721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.007 [2024-04-18 12:05:05.433790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.433816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.443288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.007 [2024-04-18 12:05:05.444389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.444415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.453707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.007 [2024-04-18 12:05:05.454813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.454838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.464088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.007 [2024-04-18 12:05:05.465253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.465278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.474474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.007 [2024-04-18 12:05:05.475536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.475562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.007 [2024-04-18 12:05:05.484877] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.007 [2024-04-18 12:05:05.485935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.007 [2024-04-18 12:05:05.485962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.008 [2024-04-18 12:05:05.495396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.008 [2024-04-18 12:05:05.496419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.008 [2024-04-18 12:05:05.496445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.008 [2024-04-18 12:05:05.505858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.008 [2024-04-18 12:05:05.506911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.008 [2024-04-18 12:05:05.506938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.008 [2024-04-18 12:05:05.516352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.008 [2024-04-18 12:05:05.517462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.008 [2024-04-18 12:05:05.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.008 [2024-04-18 12:05:05.526898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.008 [2024-04-18 12:05:05.527931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.008 [2024-04-18 12:05:05.527962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.008 [2024-04-18 12:05:05.537385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.008 [2024-04-18 12:05:05.538515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.008 [2024-04-18 12:05:05.538541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.008 [2024-04-18 12:05:05.547818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.008 [2024-04-18 12:05:05.548883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.008 [2024-04-18 12:05:05.548908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 
p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.558601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.268 [2024-04-18 12:05:05.559741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.559767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.569094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.268 [2024-04-18 12:05:05.570259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.570284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.579513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.268 [2024-04-18 12:05:05.580594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.580619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.589989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.268 [2024-04-18 12:05:05.591039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.591065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.600464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.268 [2024-04-18 12:05:05.601575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.601600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.610824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.268 [2024-04-18 12:05:05.611953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.611979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.621190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.268 [2024-04-18 12:05:05.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.622358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.631557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.268 [2024-04-18 12:05:05.632625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.632652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.641934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.268 [2024-04-18 12:05:05.643037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.643062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.652264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.268 [2024-04-18 12:05:05.653378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.653405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.662656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.268 [2024-04-18 12:05:05.663770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.663795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.673009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.268 [2024-04-18 12:05:05.674140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.683367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.268 [2024-04-18 12:05:05.684469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.684511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.693744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.268 [2024-04-18 12:05:05.694819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.694846] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.704057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.268 [2024-04-18 12:05:05.705162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.705188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.714410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.268 [2024-04-18 12:05:05.715553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.715580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.724812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.268 [2024-04-18 12:05:05.725936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.725961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.735179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.268 [2024-04-18 12:05:05.736255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.736282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.745745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.268 [2024-04-18 12:05:05.746847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.268 [2024-04-18 12:05:05.746872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.268 [2024-04-18 12:05:05.756051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.269 [2024-04-18 12:05:05.757182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.269 [2024-04-18 12:05:05.757208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.269 [2024-04-18 12:05:05.766488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.269 [2024-04-18 12:05:05.767567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23212 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:15.269 [2024-04-18 12:05:05.767592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.269 [2024-04-18 12:05:05.776848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.269 [2024-04-18 12:05:05.777928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.269 [2024-04-18 12:05:05.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.269 [2024-04-18 12:05:05.787241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.269 [2024-04-18 12:05:05.788348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.269 [2024-04-18 12:05:05.788373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.269 [2024-04-18 12:05:05.797588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.269 [2024-04-18 12:05:05.798651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.269 [2024-04-18 12:05:05.798681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.269 [2024-04-18 12:05:05.807975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.269 [2024-04-18 12:05:05.809066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.269 [2024-04-18 12:05:05.809091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.527 [2024-04-18 12:05:05.818736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.527 [2024-04-18 12:05:05.819866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.527 [2024-04-18 12:05:05.819893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.527 [2024-04-18 12:05:05.829231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.527 [2024-04-18 12:05:05.830319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.527 [2024-04-18 12:05:05.830345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.527 [2024-04-18 12:05:05.839606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.527 [2024-04-18 12:05:05.840753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:97 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.527 [2024-04-18 12:05:05.840778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.527 [2024-04-18 12:05:05.849984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.527 [2024-04-18 12:05:05.851087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.527 [2024-04-18 12:05:05.851112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.527 [2024-04-18 12:05:05.860389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.527 [2024-04-18 12:05:05.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.527 [2024-04-18 12:05:05.861524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.870758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.528 [2024-04-18 12:05:05.871900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.881168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.528 [2024-04-18 12:05:05.882257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.882283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.891534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.528 [2024-04-18 12:05:05.892661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.892695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.901893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.528 [2024-04-18 12:05:05.902981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.903008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.912266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.528 [2024-04-18 12:05:05.913380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.913405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.922661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.528 [2024-04-18 12:05:05.923767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.923793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.933035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.528 [2024-04-18 12:05:05.934112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.934137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.943391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.528 [2024-04-18 12:05:05.944521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.944547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.953777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.528 [2024-04-18 12:05:05.954875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.954901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.964141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.528 [2024-04-18 12:05:05.965214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.965239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.974471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.528 [2024-04-18 12:05:05.975624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.975653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.984852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195fc560 00:29:15.528 [2024-04-18 12:05:05.985928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.985954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:05.995430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.528 [2024-04-18 12:05:05.996516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:05.996541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.005805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.528 [2024-04-18 12:05:06.006925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.006951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.016180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.528 [2024-04-18 12:05:06.017266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.017292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.026515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.528 [2024-04-18 12:05:06.027636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.027662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.036877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.528 [2024-04-18 12:05:06.037886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.037912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.047259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.528 [2024-04-18 12:05:06.048366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.048392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.057665] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.528 [2024-04-18 12:05:06.058810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.058835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.528 [2024-04-18 12:05:06.068105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.528 [2024-04-18 12:05:06.069123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-04-18 12:05:06.069148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.078893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.080065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.080091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.089357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.787 [2024-04-18 12:05:06.090478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.090504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.099758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.787 [2024-04-18 12:05:06.100872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.100898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.110124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.787 [2024-04-18 12:05:06.111266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.111293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.120477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.121572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.121598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.130825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.787 [2024-04-18 12:05:06.131911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.131936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.141150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.787 [2024-04-18 12:05:06.142258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.142284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.151560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.787 [2024-04-18 12:05:06.152631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.152655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.161962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.163090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.163116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.172304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.787 [2024-04-18 12:05:06.173376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.173402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.182676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.787 [2024-04-18 12:05:06.183761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.183787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.193018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.787 [2024-04-18 12:05:06.194139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.194165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.203414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.204502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.204527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.213805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.787 [2024-04-18 12:05:06.214947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.214972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.224159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.787 [2024-04-18 12:05:06.225248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.225274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.234537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.787 [2024-04-18 12:05:06.235655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.235680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.244873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.245975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.246001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.255481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.787 [2024-04-18 12:05:06.256543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.256568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.265869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.787 [2024-04-18 12:05:06.266955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.266981] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.276199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.787 [2024-04-18 12:05:06.277332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.277358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.286577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.287651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.287677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.296944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:15.787 [2024-04-18 12:05:06.298122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.298147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.307303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:15.787 [2024-04-18 12:05:06.308293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.308318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.317701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:15.787 [2024-04-18 12:05:06.318802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.318828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.787 [2024-04-18 12:05:06.328015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:15.787 [2024-04-18 12:05:06.329150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.787 [2024-04-18 12:05:06.329176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.338824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.047 [2024-04-18 12:05:06.339969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3230 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.339995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.349297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.047 [2024-04-18 12:05:06.350430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.350460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.359693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.047 [2024-04-18 12:05:06.360819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.360845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.370100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.047 [2024-04-18 12:05:06.371192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.371217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.380446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.047 [2024-04-18 12:05:06.381577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.381612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.390895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.047 [2024-04-18 12:05:06.391963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.391989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.401265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.047 [2024-04-18 12:05:06.402388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.402413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.047 [2024-04-18 12:05:06.411624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.047 [2024-04-18 12:05:06.412739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.047 [2024-04-18 12:05:06.412765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.422013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.048 [2024-04-18 12:05:06.423092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.423122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.432323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.048 [2024-04-18 12:05:06.433354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.433380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.442741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.048 [2024-04-18 12:05:06.443826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.443852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.453258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.048 [2024-04-18 12:05:06.454377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.454404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.463655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.048 [2024-04-18 12:05:06.464663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.464688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.474007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.048 [2024-04-18 12:05:06.475119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.475145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.484390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.048 [2024-04-18 12:05:06.485473] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.485514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.494788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.048 [2024-04-18 12:05:06.495792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.495818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.505294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.048 [2024-04-18 12:05:06.506422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.506447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.515702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.048 [2024-04-18 12:05:06.516810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.516835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.526060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.048 [2024-04-18 12:05:06.527153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.527179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.536501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.048 [2024-04-18 12:05:06.537631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.537657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.546922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.048 [2024-04-18 12:05:06.547989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.548014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.557267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f20d8 00:29:16.048 [2024-04-18 12:05:06.558386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.558420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.567701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.048 [2024-04-18 12:05:06.568800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.568824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.578059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.048 [2024-04-18 12:05:06.579200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.579226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.048 [2024-04-18 12:05:06.588462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.048 [2024-04-18 12:05:06.589560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.048 [2024-04-18 12:05:06.589585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.599263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.307 [2024-04-18 12:05:06.600381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.600411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.609756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.307 [2024-04-18 12:05:06.610917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.610942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.620169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.307 [2024-04-18 12:05:06.621281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.621308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.630476] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.307 [2024-04-18 12:05:06.631578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.631604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.640840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.307 [2024-04-18 12:05:06.641945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.641971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.651327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.307 [2024-04-18 12:05:06.652440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.652475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.662020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.307 [2024-04-18 12:05:06.663068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.663094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.672656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.307 [2024-04-18 12:05:06.673786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.673811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.683047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.307 [2024-04-18 12:05:06.684139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.684164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.693361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.307 [2024-04-18 12:05:06.694469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.694496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 
[2024-04-18 12:05:06.703791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.307 [2024-04-18 12:05:06.704791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.704816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.714176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.307 [2024-04-18 12:05:06.715305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.715331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.724596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.307 [2024-04-18 12:05:06.725594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.725619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.734947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.307 [2024-04-18 12:05:06.735979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.736005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.745330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.307 [2024-04-18 12:05:06.746308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.746334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.755844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.307 [2024-04-18 12:05:06.756892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.756918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.766259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.307 [2024-04-18 12:05:06.767345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.767370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.776657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.307 [2024-04-18 12:05:06.777758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.777784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.786973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.307 [2024-04-18 12:05:06.788083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.788109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.797366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.307 [2024-04-18 12:05:06.798365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.798390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.807725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.307 [2024-04-18 12:05:06.808833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.808858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.818125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.307 [2024-04-18 12:05:06.819210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.819236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.307 [2024-04-18 12:05:06.828445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.307 [2024-04-18 12:05:06.829544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.307 [2024-04-18 12:05:06.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.308 [2024-04-18 12:05:06.838841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.308 [2024-04-18 12:05:06.839858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.308 [2024-04-18 12:05:06.839883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.308 [2024-04-18 12:05:06.849241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.308 [2024-04-18 12:05:06.850354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.308 [2024-04-18 12:05:06.850380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.860072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.566 [2024-04-18 12:05:06.861106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.861132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.870484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.566 [2024-04-18 12:05:06.871572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.871604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.880860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.566 [2024-04-18 12:05:06.881971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.881997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.891262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.566 [2024-04-18 12:05:06.892359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.892384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.901673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.566 [2024-04-18 12:05:06.902803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.902829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.912057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.566 [2024-04-18 12:05:06.913121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:16.566 [2024-04-18 12:05:06.913145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.922421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.566 [2024-04-18 12:05:06.923446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.923477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.932810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:29:16.566 [2024-04-18 12:05:06.933898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.933923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.566 [2024-04-18 12:05:06.943188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:29:16.566 [2024-04-18 12:05:06.944282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.566 [2024-04-18 12:05:06.944309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.567 [2024-04-18 12:05:06.953568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:29:16.567 [2024-04-18 12:05:06.954626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.567 [2024-04-18 12:05:06.954651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.567 [2024-04-18 12:05:06.963973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:29:16.567 [2024-04-18 12:05:06.965048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.567 [2024-04-18 12:05:06.965074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.567 00:29:16.567 Latency(us) 00:29:16.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.567 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.567 nvme0n1 : 2.00 24404.78 95.33 0.00 0.00 5237.66 4194.30 14994.64 00:29:16.567 =================================================================================================================== 00:29:16.567 Total : 24404.78 95.33 0.00 0.00 5237.66 4194.30 14994.64 00:29:16.567 0 00:29:16.567 12:05:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:16.567 12:05:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:16.567 12:05:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:16.567 | .driver_specific 00:29:16.567 | .nvme_error 
00:29:16.567 | .status_code 00:29:16.567 | .command_transient_transport_error' 00:29:16.567 12:05:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:16.825 12:05:07 -- host/digest.sh@71 -- # (( 191 > 0 )) 00:29:16.825 12:05:07 -- host/digest.sh@73 -- # killprocess 2643374 00:29:16.825 12:05:07 -- common/autotest_common.sh@936 -- # '[' -z 2643374 ']' 00:29:16.825 12:05:07 -- common/autotest_common.sh@940 -- # kill -0 2643374 00:29:16.825 12:05:07 -- common/autotest_common.sh@941 -- # uname 00:29:16.825 12:05:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:16.825 12:05:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2643374 00:29:16.825 12:05:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:16.825 12:05:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:16.825 12:05:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2643374' 00:29:16.825 killing process with pid 2643374 00:29:16.825 12:05:07 -- common/autotest_common.sh@955 -- # kill 2643374 00:29:16.825 Received shutdown signal, test time was about 2.000000 seconds 00:29:16.825 00:29:16.825 Latency(us) 00:29:16.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.825 =================================================================================================================== 00:29:16.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:16.825 12:05:07 -- common/autotest_common.sh@960 -- # wait 2643374 00:29:17.760 12:05:08 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:17.760 12:05:08 -- host/digest.sh@54 -- # local rw bs qd 00:29:17.760 12:05:08 -- host/digest.sh@56 -- # rw=randwrite 00:29:17.760 12:05:08 -- host/digest.sh@56 -- # bs=131072 00:29:17.760 12:05:08 -- host/digest.sh@56 -- # qd=16 00:29:17.760 12:05:08 -- host/digest.sh@58 -- # bperfpid=2644098 00:29:17.760 12:05:08 -- host/digest.sh@60 -- # waitforlisten 2644098 /var/tmp/bperf.sock 00:29:17.760 12:05:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:17.760 12:05:08 -- common/autotest_common.sh@817 -- # '[' -z 2644098 ']' 00:29:17.760 12:05:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.760 12:05:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:17.760 12:05:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.760 12:05:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:17.760 12:05:08 -- common/autotest_common.sh@10 -- # set +x 00:29:17.760 [2024-04-18 12:05:08.302852] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:17.760 [2024-04-18 12:05:08.302969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2644098 ] 00:29:17.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:17.760 Zero copy mechanism will not be used. 
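Context for the check traced above: the pass/fail test at host/digest.sh@71, (( 191 > 0 )), counts how many COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions bdevperf recorded during the first 2-second run. The count comes from the bdev_get_iostat RPC combined with the jq filter traced at host/digest.sh@27-28; the per-status-code NVMe error counters are only populated because bdev_nvme_set_options is called with --nvme-error-stat (traced for the next run below). A minimal stand-alone bash sketch of that query, using the socket path and bdev name from the trace; this restates the traced commands rather than documenting digest.sh itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Fetch per-bdev I/O statistics from the running bdevperf instance and pull out
    # the count of transient transport errors (status 00/22) recorded for nvme0n1.
    errcount=$($rpc -s $sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest test passes when at least one such error was observed during the run.
    (( errcount > 0 ))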
00:29:18.019 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.019 [2024-04-18 12:05:08.428023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.277 [2024-04-18 12:05:08.640307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.535 12:05:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:18.535 12:05:09 -- common/autotest_common.sh@850 -- # return 0 00:29:18.535 12:05:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.535 12:05:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.793 12:05:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:18.793 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.793 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:29:18.793 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.793 12:05:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.793 12:05:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.052 nvme0n1 00:29:19.052 12:05:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:19.052 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.052 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:29:19.052 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.052 12:05:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:19.052 12:05:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.313 Zero copy mechanism will not be used. 00:29:19.313 Running I/O for 2 seconds... 
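The setup traced just above for this second bdevperf run (randwrite, 131072-byte I/O, queue depth 16) follows the same pattern as the first: NVMe error counters and unlimited retries are enabled on the host bdev, crc32c error injection is reset, the controller is attached over TCP with data digest (--ddgst) enabled, and injection is then switched to corrupt mode so the crc32c computed over the transferred data no longer matches the digest carried in the PDU, which is what produces the data_crc32_calc_done errors that follow. A condensed bash sketch restating those traced RPC calls, with two assumptions not spelled out in the trace: the accel_error_inject_error calls go through rpc_cmd and are taken here to land on the nvmf target's default RPC socket, and -i 32 is read as the number of crc32c operations to corrupt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Host side (bdevperf): keep per-status-code NVMe error counters, retry failed I/O forever.
    $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side: make sure no stale crc32c error injection is active.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the controller with data digest enabled; data PDUs now carry a CRC32C digest.
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Target side: corrupt crc32c results so digests stop matching (-i 32 taken from the trace).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the 2-second workload defined on the bdevperf command line.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests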
00:29:19.313 [2024-04-18 12:05:09.633840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.634424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.634475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.645523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.645953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.645986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.654368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.654803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.654832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.663761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.664170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.664201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.673605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.674010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.674039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.682508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.682933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.682961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.691210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.691614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.691641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.699837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.700247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.700274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.708989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.709403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.709429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.716436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.716872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.716898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.724648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.725063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.725090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.732607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.732796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.732822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.740777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.741166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.741193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.748906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.749373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 
12:05:09.749400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.757928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.758368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.313 [2024-04-18 12:05:09.758394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.313 [2024-04-18 12:05:09.766590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.313 [2024-04-18 12:05:09.767032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.767058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.775366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.775931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.775958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.784334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.784877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.784904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.792826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.793311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.793337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.801414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.801867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.801893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.810622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.811134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.811160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.819751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.820158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.820184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.828319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.828723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.828749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.837250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.837730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.837764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.846578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.847063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.847090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.314 [2024-04-18 12:05:09.856364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.314 [2024-04-18 12:05:09.856844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.314 [2024-04-18 12:05:09.856871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.866014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.866533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.866560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.875575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.876044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.876070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.884231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.884677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.884704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.893154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.893608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.893635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.902589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.903105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.903131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.912136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.912598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.912625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.921274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.921699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.921726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.930494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.931055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.931082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.939220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.939643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.939669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.947191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.947595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.947620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.955096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.955487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.955514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.964002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.964487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.964513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.973156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.973601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.973627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.574 [2024-04-18 12:05:09.982652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.574 [2024-04-18 12:05:09.983052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.574 [2024-04-18 12:05:09.983079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:09.992319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:09.992769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:09.992795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.001224] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.001627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.001655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.010110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.010580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.010607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.019071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.019530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.019557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.028056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.028532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.036301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.036733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.036762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.044230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.044645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.044673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.052745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.053126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.053154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.060415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.060893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.060921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.068649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.069050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.069079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.077162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.077634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.077662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.085873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.086345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.086373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.094335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.094722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.094750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.102178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.102559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.102586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.109485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.109847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.109874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.575 [2024-04-18 12:05:10.117096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.575 [2024-04-18 12:05:10.117573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.575 [2024-04-18 12:05:10.117602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.125535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.125909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.125936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.132616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.133072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.133099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.140305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.140788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.140816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.148243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.148626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.148653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.156854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.157256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.157283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.164359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.164737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.164764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.172489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.172911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.172939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.180506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.180945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.180978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.188356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.188785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.188811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.196818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.197199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.197226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.204688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.205066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.205093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.212415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.212834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.212861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.220467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.220866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.220893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.228201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.228699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.228725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.236243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.236671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.236698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.243859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.244255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.244281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.251026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.251402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.251429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.258835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.259213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.259240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.265943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.266328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.266356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.273172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.273615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.273641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.281205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.281608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.281635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.288607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.288994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.289021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.296086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.836 [2024-04-18 12:05:10.296471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.836 [2024-04-18 12:05:10.296498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.836 [2024-04-18 12:05:10.304514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.304911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.304953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.312274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.312648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.312679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.320256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.320650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.320678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.328098] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.328514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.328540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.335896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.336279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.336306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.343823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.344208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.344235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.351318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.351716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.351742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.359099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.359515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.359541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.366722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.367103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.367137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.374338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.374769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.374795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.837 [2024-04-18 12:05:10.382363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:19.837 [2024-04-18 12:05:10.382746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.837 [2024-04-18 12:05:10.382773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.390158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.390579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.390605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.397519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.397992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.398019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.404977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.405352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.405379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.412814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.413192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.413219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.420580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.420961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.420989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.429319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.429738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.429765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.438207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.438651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.438678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.446019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.446394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.446424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.453412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.453842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.453869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.462941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.463432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.463465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.473326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.473762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.473790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.482589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.483060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.483087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.490725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.491195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.491221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.500287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.500734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.500761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.510430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.510820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.097 [2024-04-18 12:05:10.510847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.097 [2024-04-18 12:05:10.525090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.097 [2024-04-18 12:05:10.525791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.525818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.538390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.538903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.538931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.548118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.548599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.548626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.556377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.556823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.556849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.565441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.565973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.565999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.581917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.582519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.582547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.594803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.595240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.595267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.603033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.603515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.603542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.610858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.611273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.611300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.620352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.620752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.620782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.629825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.630205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.630231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.098 [2024-04-18 12:05:10.638462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:20.098 [2024-04-18 12:05:10.638878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.098 [2024-04-18 12:05:10.638903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.646824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.647196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.358 [2024-04-18 12:05:10.647223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.654735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.655247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.358 [2024-04-18 12:05:10.655273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.663303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.663718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.358 [2024-04-18 12:05:10.663744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.670374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.670691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.358 [2024-04-18 12:05:10.670717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.678818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.679199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.358 [2024-04-18 12:05:10.679225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.687652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.687987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.358 [2024-04-18 12:05:10.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.358 [2024-04-18 12:05:10.695950] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.358 [2024-04-18 12:05:10.696256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.696283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.704387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.704820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.704846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.711579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.711890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.711916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.720219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.720546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.720572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.727807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.728244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.728270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.743922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.744335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.744361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.753611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.754018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.754045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.762135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.762591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.762618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.771306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.771873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.771899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.781010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.781465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.789074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.789545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.789572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.797562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.797956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.797983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.805758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.806226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.806252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.813626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.814006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.814031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.822671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.822989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.823015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.829995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.830390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.830417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.839201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.839613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.839639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.855923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.856339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.856365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.866244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.866590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.866616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.874081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.874398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.874423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.881765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.882141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.882167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.888180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.888485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.888512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.896395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.896804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.896829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.359 [2024-04-18 12:05:10.904572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.359 [2024-04-18 12:05:10.904883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.359 [2024-04-18 12:05:10.904909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.911836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.912181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.912206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.919150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.919521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.919560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.927001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.927367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.927393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.934533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.934908] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.934935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.941679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.942087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.942122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.949866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.950243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.950269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.957941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.958310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.958336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.965253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.965606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.965632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.973091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.973434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.973464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.980814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.981130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.981155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.988165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 
[2024-04-18 12:05:10.988490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.988520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.620 [2024-04-18 12:05:10.995676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.620 [2024-04-18 12:05:10.996043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.620 [2024-04-18 12:05:10.996069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.003246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.003592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.003618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.010530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.010906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.018166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.018549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.018575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.025832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.026144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.026170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.032871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.033215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.033241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.040797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.041116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.041142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.048297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.048649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.048675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.055721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.056068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.056093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.062723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.063055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.063081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.070137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.070446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.070478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.077595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.077901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.077926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.085291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.085692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.085718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:20.621 [2024-04-18 12:05:11.092445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.092792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.092817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.099987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.100357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.100383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.107264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.107619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.107644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.114429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.114825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.114855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.121833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.122181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.122207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.129625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.129971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.129997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.138033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.138425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.138457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.147254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.147729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.147755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.156570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.156823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.156849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.621 [2024-04-18 12:05:11.165664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.621 [2024-04-18 12:05:11.166093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.621 [2024-04-18 12:05:11.166119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.174347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.174759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.174786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.183141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.183531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.183557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.192224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.192672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.192698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.201321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.201725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 
12:05:11.201751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.210547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.211027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.211054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.219636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.220082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.220108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.229083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.229489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.229515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.238373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.238778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.238804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.247949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.248321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.248347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.256967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.257364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.257390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.266046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.266455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.275317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.275693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.275718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.284330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.284747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.284774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.292997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.293469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.293495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.301619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.301984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.302010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.310736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.311114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.311142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.882 [2024-04-18 12:05:11.319639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.882 [2024-04-18 12:05:11.320081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.882 [2024-04-18 12:05:11.320107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.329260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.329616] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.329642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.338173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.338622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.338648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.346862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.347221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.347247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.356106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.356536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.356563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.363773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.364170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.364196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.372716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.373019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.373044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.381150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.381516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.381542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.390350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.390732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.390758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.399681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.400045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.400070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.408375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.408732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.408759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.417184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.417633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.417663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.883 [2024-04-18 12:05:11.426070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:20.883 [2024-04-18 12:05:11.426526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.883 [2024-04-18 12:05:11.426552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.434338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.434739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.434766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.442533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.442961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.442987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 
12:05:11.450844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.451221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.451247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.459315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.459769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.459796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.467591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.468050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.468076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.475802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.476253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.476288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.484752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.485162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.485188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.492240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.492701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.492727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.500571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.501014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.501040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.508030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.508504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.508530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.515629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.516084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.516112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.523854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.524210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.524237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.531298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.531637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.531663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.538981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.539325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.539351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.548012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.548512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.548539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.557417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.557857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.557883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.566963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.567360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.567387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.576080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.576430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.576462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.585282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.585691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.585717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.594799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.595255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.595282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.604463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.604840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.604866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.143 [2024-04-18 12:05:11.613357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:21.143 [2024-04-18 12:05:11.613588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.143 [2024-04-18 12:05:11.613625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.143 00:29:21.143 Latency(us) 00:29:21.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.143 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:21.143 nvme0n1 : 2.00 3588.25 448.53 0.00 0.00 4449.77 2896.69 20656.95 00:29:21.143 
=================================================================================================================== 00:29:21.143 Total : 3588.25 448.53 0.00 0.00 4449.77 2896.69 20656.95 00:29:21.143 0 00:29:21.143 12:05:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:21.143 12:05:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:21.143 12:05:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:21.143 | .driver_specific 00:29:21.143 | .nvme_error 00:29:21.143 | .status_code 00:29:21.143 | .command_transient_transport_error' 00:29:21.143 12:05:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:21.403 12:05:11 -- host/digest.sh@71 -- # (( 232 > 0 )) 00:29:21.403 12:05:11 -- host/digest.sh@73 -- # killprocess 2644098 00:29:21.403 12:05:11 -- common/autotest_common.sh@936 -- # '[' -z 2644098 ']' 00:29:21.403 12:05:11 -- common/autotest_common.sh@940 -- # kill -0 2644098 00:29:21.403 12:05:11 -- common/autotest_common.sh@941 -- # uname 00:29:21.403 12:05:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:21.403 12:05:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2644098 00:29:21.403 12:05:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:21.403 12:05:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:21.403 12:05:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2644098' 00:29:21.403 killing process with pid 2644098 00:29:21.403 12:05:11 -- common/autotest_common.sh@955 -- # kill 2644098 00:29:21.403 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.403 00:29:21.403 Latency(us) 00:29:21.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.403 =================================================================================================================== 00:29:21.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.403 12:05:11 -- common/autotest_common.sh@960 -- # wait 2644098 00:29:22.340 12:05:12 -- host/digest.sh@116 -- # killprocess 2641638 00:29:22.340 12:05:12 -- common/autotest_common.sh@936 -- # '[' -z 2641638 ']' 00:29:22.340 12:05:12 -- common/autotest_common.sh@940 -- # kill -0 2641638 00:29:22.340 12:05:12 -- common/autotest_common.sh@941 -- # uname 00:29:22.340 12:05:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:22.340 12:05:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2641638 00:29:22.600 12:05:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:22.600 12:05:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:22.600 12:05:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2641638' 00:29:22.600 killing process with pid 2641638 00:29:22.600 12:05:12 -- common/autotest_common.sh@955 -- # kill 2641638 00:29:22.600 12:05:12 -- common/autotest_common.sh@960 -- # wait 2641638 00:29:23.976 00:29:23.976 real 0m21.191s 00:29:23.976 user 0m38.812s 00:29:23.976 sys 0m5.152s 00:29:23.976 12:05:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:23.976 12:05:14 -- common/autotest_common.sh@10 -- # set +x 00:29:23.976 ************************************ 00:29:23.976 END TEST nvmf_digest_error 00:29:23.976 ************************************ 00:29:23.976 12:05:14 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:23.976 12:05:14 -- host/digest.sh@150 -- # nvmftestfini 00:29:23.976 
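The (( 232 > 0 )) check traced above is the digest test counting how many of the injected data digest errors surfaced as transient transport errors on the bdev. Condensed into a standalone sketch (the bperf socket path and bdev name are simply the ones used in this run), the query is roughly:

  # count transient transport errors reported by nvme0n1 over the bperf RPC socket
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"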
12:05:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:23.976 12:05:14 -- nvmf/common.sh@117 -- # sync 00:29:23.976 12:05:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:23.976 12:05:14 -- nvmf/common.sh@120 -- # set +e 00:29:23.976 12:05:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:23.976 12:05:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:23.976 rmmod nvme_tcp 00:29:23.976 rmmod nvme_fabrics 00:29:23.976 rmmod nvme_keyring 00:29:23.976 12:05:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:23.976 12:05:14 -- nvmf/common.sh@124 -- # set -e 00:29:23.976 12:05:14 -- nvmf/common.sh@125 -- # return 0 00:29:23.976 12:05:14 -- nvmf/common.sh@478 -- # '[' -n 2641638 ']' 00:29:23.976 12:05:14 -- nvmf/common.sh@479 -- # killprocess 2641638 00:29:23.976 12:05:14 -- common/autotest_common.sh@936 -- # '[' -z 2641638 ']' 00:29:23.976 12:05:14 -- common/autotest_common.sh@940 -- # kill -0 2641638 00:29:23.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2641638) - No such process 00:29:23.976 12:05:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2641638 is not found' 00:29:23.976 Process with pid 2641638 is not found 00:29:23.976 12:05:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:23.976 12:05:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:23.976 12:05:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:23.976 12:05:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.976 12:05:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:23.976 12:05:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.976 12:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.976 12:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.881 12:05:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:25.881 00:29:25.881 real 0m52.878s 00:29:25.881 user 1m22.393s 00:29:25.881 sys 0m15.386s 00:29:25.881 12:05:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:25.881 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:29:25.881 ************************************ 00:29:25.881 END TEST nvmf_digest 00:29:25.881 ************************************ 00:29:25.881 12:05:16 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:29:25.881 12:05:16 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:29:25.881 12:05:16 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:29:25.881 12:05:16 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:25.881 12:05:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:25.881 12:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:25.881 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:29:26.143 ************************************ 00:29:26.143 START TEST nvmf_bdevperf 00:29:26.143 ************************************ 00:29:26.143 12:05:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:26.143 * Looking for test storage... 
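The nvmftestfini cleanup traced just above boils down to roughly this sequence (a sketch of what nvmf/common.sh is doing here, with the pid and interface name taken from this run, not a verbatim copy of the function):

  # approximate nvmf teardown: unload host modules, stop the target app, flush the test interface
  sync
  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill -0 2641638 2>/dev/null && kill 2641638 || echo 'Process with pid 2641638 is not found'
  ip -4 addr flush cvl_0_1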
00:29:26.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:26.143 12:05:16 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.143 12:05:16 -- nvmf/common.sh@7 -- # uname -s 00:29:26.143 12:05:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.143 12:05:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.143 12:05:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.143 12:05:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.143 12:05:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.143 12:05:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.143 12:05:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.143 12:05:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.143 12:05:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.143 12:05:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.143 12:05:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:26.143 12:05:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:26.143 12:05:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.143 12:05:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.143 12:05:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.143 12:05:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.143 12:05:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.143 12:05:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.143 12:05:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.143 12:05:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.143 12:05:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.143 12:05:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.143 12:05:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.143 12:05:16 -- paths/export.sh@5 -- # export PATH 00:29:26.143 12:05:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.143 12:05:16 -- nvmf/common.sh@47 -- # : 0 00:29:26.143 12:05:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:26.143 12:05:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:26.143 12:05:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.143 12:05:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.143 12:05:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.143 12:05:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:26.143 12:05:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:26.143 12:05:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:26.143 12:05:16 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:26.143 12:05:16 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:26.143 12:05:16 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:26.143 12:05:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:26.143 12:05:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.143 12:05:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:26.143 12:05:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:26.143 12:05:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:26.143 12:05:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.143 12:05:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:26.143 12:05:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.143 12:05:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:26.143 12:05:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:26.143 12:05:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:26.143 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:29:32.708 12:05:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:32.708 12:05:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:32.708 12:05:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:32.708 12:05:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:32.708 12:05:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:32.708 12:05:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:32.708 12:05:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:32.708 12:05:23 -- nvmf/common.sh@295 -- # net_devs=() 00:29:32.708 12:05:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:32.708 12:05:23 -- nvmf/common.sh@296 
-- # e810=() 00:29:32.708 12:05:23 -- nvmf/common.sh@296 -- # local -ga e810 00:29:32.708 12:05:23 -- nvmf/common.sh@297 -- # x722=() 00:29:32.708 12:05:23 -- nvmf/common.sh@297 -- # local -ga x722 00:29:32.708 12:05:23 -- nvmf/common.sh@298 -- # mlx=() 00:29:32.708 12:05:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:32.708 12:05:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.708 12:05:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:32.708 12:05:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:32.708 12:05:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:32.708 12:05:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.708 12:05:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:32.708 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:32.708 12:05:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.708 12:05:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:32.708 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:32.708 12:05:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:32.708 12:05:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.708 12:05:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.708 12:05:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:32.708 12:05:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.708 12:05:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:32.708 Found 
net devices under 0000:af:00.0: cvl_0_0 00:29:32.708 12:05:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.708 12:05:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.708 12:05:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.708 12:05:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:32.708 12:05:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.708 12:05:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:32.708 Found net devices under 0000:af:00.1: cvl_0_1 00:29:32.708 12:05:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.708 12:05:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:32.708 12:05:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:32.708 12:05:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:32.708 12:05:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:32.708 12:05:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.708 12:05:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.708 12:05:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.708 12:05:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:32.708 12:05:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.708 12:05:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.708 12:05:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:32.708 12:05:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.708 12:05:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.708 12:05:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:32.708 12:05:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:32.708 12:05:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.708 12:05:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.968 12:05:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.968 12:05:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.968 12:05:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:32.968 12:05:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.968 12:05:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.968 12:05:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.968 12:05:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:32.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:29:32.968 00:29:32.968 --- 10.0.0.2 ping statistics --- 00:29:32.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.968 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:29:32.968 12:05:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:29:32.968 00:29:32.968 --- 10.0.0.1 ping statistics --- 00:29:32.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.968 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:32.968 12:05:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.968 12:05:23 -- nvmf/common.sh@411 -- # return 0 00:29:32.968 12:05:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:32.968 12:05:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.968 12:05:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:32.968 12:05:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:32.968 12:05:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.968 12:05:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:32.968 12:05:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:33.227 12:05:23 -- host/bdevperf.sh@25 -- # tgt_init 00:29:33.227 12:05:23 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:33.227 12:05:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:33.227 12:05:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:33.227 12:05:23 -- common/autotest_common.sh@10 -- # set +x 00:29:33.227 12:05:23 -- nvmf/common.sh@470 -- # nvmfpid=2648871 00:29:33.227 12:05:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:33.227 12:05:23 -- nvmf/common.sh@471 -- # waitforlisten 2648871 00:29:33.227 12:05:23 -- common/autotest_common.sh@817 -- # '[' -z 2648871 ']' 00:29:33.227 12:05:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.227 12:05:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:33.227 12:05:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.227 12:05:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:33.227 12:05:23 -- common/autotest_common.sh@10 -- # set +x 00:29:33.227 [2024-04-18 12:05:23.600309] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:33.227 [2024-04-18 12:05:23.600398] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.227 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.227 [2024-04-18 12:05:23.729417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:33.486 [2024-04-18 12:05:23.939621] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.486 [2024-04-18 12:05:23.939669] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.486 [2024-04-18 12:05:23.939681] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.486 [2024-04-18 12:05:23.939695] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.486 [2024-04-18 12:05:23.939707] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
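The trace above is the NVMe/TCP fixture setup: nvmf/common.sh first matches the E810 PCI IDs (0x1592/0x159b) to find the two test ports, then nvmf_tcp_init moves one port into a private network namespace so it can act as the target (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1), opens TCP port 4420, verifies reachability with ping in both directions, and loads nvme-tcp before nvmfappstart launches nvmf_tgt inside the namespace. A condensed sketch of that setup, reconstructed from the commands in the trace (interface names and addresses taken from the log; cleanup and error handling omitted):

#!/usr/bin/env bash
# Sketch of the namespace split performed by nvmf_tcp_init above.
TGT_IF=cvl_0_0            # target-side port, moved into its own namespace
INI_IF=cvl_0_1            # initiator-side port, left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                               # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                           # target ns -> initiator
modprobe nvme-tcp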
00:29:33.486 [2024-04-18 12:05:23.939848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.486 [2024-04-18 12:05:23.939912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.486 [2024-04-18 12:05:23.939920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.054 12:05:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:34.054 12:05:24 -- common/autotest_common.sh@850 -- # return 0 00:29:34.054 12:05:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:34.054 12:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:34.054 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 12:05:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.054 12:05:24 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.054 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.054 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 [2024-04-18 12:05:24.422279] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.054 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.054 12:05:24 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.054 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.054 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 Malloc0 00:29:34.054 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.054 12:05:24 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.054 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.054 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.054 12:05:24 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.054 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.054 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.054 12:05:24 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.054 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.054 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 [2024-04-18 12:05:24.557527] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.054 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.054 12:05:24 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:34.054 12:05:24 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:34.054 12:05:24 -- nvmf/common.sh@521 -- # config=() 00:29:34.054 12:05:24 -- nvmf/common.sh@521 -- # local subsystem config 00:29:34.054 12:05:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:34.054 12:05:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:34.054 { 00:29:34.054 "params": { 00:29:34.054 "name": "Nvme$subsystem", 00:29:34.054 "trtype": "$TEST_TRANSPORT", 00:29:34.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.054 "adrfam": "ipv4", 00:29:34.054 "trsvcid": "$NVMF_PORT", 00:29:34.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.054 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.054 "hdgst": ${hdgst:-false}, 00:29:34.054 "ddgst": ${ddgst:-false} 00:29:34.054 }, 00:29:34.054 "method": "bdev_nvme_attach_controller" 00:29:34.054 } 00:29:34.054 EOF 00:29:34.054 )") 00:29:34.054 12:05:24 -- nvmf/common.sh@543 -- # cat 00:29:34.054 12:05:24 -- nvmf/common.sh@545 -- # jq . 00:29:34.054 12:05:24 -- nvmf/common.sh@546 -- # IFS=, 00:29:34.054 12:05:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:34.054 "params": { 00:29:34.054 "name": "Nvme1", 00:29:34.054 "trtype": "tcp", 00:29:34.054 "traddr": "10.0.0.2", 00:29:34.054 "adrfam": "ipv4", 00:29:34.054 "trsvcid": "4420", 00:29:34.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.054 "hdgst": false, 00:29:34.054 "ddgst": false 00:29:34.054 }, 00:29:34.054 "method": "bdev_nvme_attach_controller" 00:29:34.054 }' 00:29:34.313 [2024-04-18 12:05:24.641145] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:34.313 [2024-04-18 12:05:24.641234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648966 ] 00:29:34.313 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.313 [2024-04-18 12:05:24.768190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.571 [2024-04-18 12:05:24.991022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.139 Running I/O for 1 seconds... 00:29:36.075 00:29:36.075 Latency(us) 00:29:36.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.075 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:36.075 Verification LBA range: start 0x0 length 0x4000 00:29:36.075 Nvme1n1 : 1.00 10078.42 39.37 0.00 0.00 12652.21 2700.08 12635.34 00:29:36.075 =================================================================================================================== 00:29:36.075 Total : 10078.42 39.37 0.00 0.00 12652.21 2700.08 12635.34 00:29:37.012 12:05:27 -- host/bdevperf.sh@30 -- # bdevperfpid=2649443 00:29:37.012 12:05:27 -- host/bdevperf.sh@32 -- # sleep 3 00:29:37.012 12:05:27 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:37.012 12:05:27 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:37.012 12:05:27 -- nvmf/common.sh@521 -- # config=() 00:29:37.012 12:05:27 -- nvmf/common.sh@521 -- # local subsystem config 00:29:37.012 12:05:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:37.012 12:05:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:37.012 { 00:29:37.012 "params": { 00:29:37.012 "name": "Nvme$subsystem", 00:29:37.012 "trtype": "$TEST_TRANSPORT", 00:29:37.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.012 "adrfam": "ipv4", 00:29:37.012 "trsvcid": "$NVMF_PORT", 00:29:37.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.012 "hdgst": ${hdgst:-false}, 00:29:37.012 "ddgst": ${ddgst:-false} 00:29:37.012 }, 00:29:37.012 "method": "bdev_nvme_attach_controller" 00:29:37.012 } 00:29:37.012 EOF 00:29:37.012 )") 00:29:37.012 12:05:27 -- nvmf/common.sh@543 -- # cat 00:29:37.012 12:05:27 -- nvmf/common.sh@545 -- # jq . 
00:29:37.012 12:05:27 -- nvmf/common.sh@546 -- # IFS=, 00:29:37.012 12:05:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:37.012 "params": { 00:29:37.012 "name": "Nvme1", 00:29:37.012 "trtype": "tcp", 00:29:37.012 "traddr": "10.0.0.2", 00:29:37.012 "adrfam": "ipv4", 00:29:37.012 "trsvcid": "4420", 00:29:37.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.012 "hdgst": false, 00:29:37.012 "ddgst": false 00:29:37.012 }, 00:29:37.012 "method": "bdev_nvme_attach_controller" 00:29:37.012 }' 00:29:37.270 [2024-04-18 12:05:27.617898] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:37.271 [2024-04-18 12:05:27.617985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649443 ] 00:29:37.271 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.271 [2024-04-18 12:05:27.742984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.530 [2024-04-18 12:05:27.970569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.098 Running I/O for 15 seconds... 00:29:40.001 12:05:30 -- host/bdevperf.sh@33 -- # kill -9 2648871 00:29:40.001 12:05:30 -- host/bdevperf.sh@35 -- # sleep 3 00:29:40.262 [2024-04-18 12:05:30.568696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.262 [2024-04-18 12:05:30.568788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.262 [2024-04-18 12:05:30.568822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.262 [2024-04-18 12:05:30.568851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.262 [2024-04-18 12:05:30.568878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.262 [2024-04-18 12:05:30.568906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.262 [2024-04-18 12:05:30.568941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.262 [2024-04-18 12:05:30.568954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) pair repeats for every remaining outstanding command on qid:1 (READ lba 41480-41856 and WRITE lba 41872-42440, cids varying) ...]
00:29:40.265 [2024-04-18 12:05:30.572097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007e40 is same with the state(5) to be set 00:29:40.265 [2024-04-18 12:05:30.572113] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:40.265 [2024-04-18 12:05:30.572124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:40.265 [2024-04-18 12:05:30.572135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41864 len:8 PRP1 0x0 PRP2 0x0 00:29:40.265 [2024-04-18 12:05:30.572149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.265 [2024-04-18 12:05:30.572417] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:29:40.265 [2024-04-18 12:05:30.575372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.265 [2024-04-18 12:05:30.575460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.265 [2024-04-18 12:05:30.576236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.265 [2024-04-18 12:05:30.576478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.265 [2024-04-18 12:05:30.576496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.265 [2024-04-18 12:05:30.576515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.265 [2024-04-18 12:05:30.576713] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.265 [2024-04-18 12:05:30.576906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.265 [2024-04-18 12:05:30.576928] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.265 [2024-04-18 12:05:30.576942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.265 [2024-04-18 12:05:30.579876] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
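Side note (editorial, not part of the captured log): the repeated "connect() failed, errno = 111" messages from posix_sock_create above correspond to ECONNREFUSED on Linux, i.e. nothing is listening on the target address while the test tears the listener down. The sketch below, which is not SPDK code, reproduces that errno for any address/port with no listener; 10.0.0.2:4420 is taken from the log and is only a placeholder here.

```c
/*
 * Minimal sketch (not SPDK code): connect() to a TCP port with no
 * listener fails with errno 111 (ECONNREFUSED) on Linux, matching the
 * posix_sock_create errors in the log above. Address and port are
 * placeholders copied from the log.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),              /* NVMe-oF TCP default port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```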
00:29:40.265 [2024-04-18 12:05:30.588853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.265 [2024-04-18 12:05:30.589283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.265 [2024-04-18 12:05:30.589655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.265 [2024-04-18 12:05:30.589711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.265 [2024-04-18 12:05:30.589755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.265 [2024-04-18 12:05:30.590412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.265 [2024-04-18 12:05:30.590948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.265 [2024-04-18 12:05:30.590963] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.265 [2024-04-18 12:05:30.590974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.265 [2024-04-18 12:05:30.593916] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.265 [2024-04-18 12:05:30.601995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.602593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.602994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.603047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.603090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.603767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.603958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.603971] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.603982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.606854] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.266 [2024-04-18 12:05:30.615065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.615598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.615906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.615922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.615934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.616124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.616308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.616323] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.616334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.619172] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.266 [2024-04-18 12:05:30.628129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.628642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.628931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.628947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.628959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.629145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.629329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.629342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.629353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.632217] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.266 [2024-04-18 12:05:30.641306] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.641913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.642302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.642354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.642395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.642841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.643030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.643043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.643054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.645932] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.266 [2024-04-18 12:05:30.654454] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.655049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.655289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.655305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.655317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.655511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.655698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.655710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.655721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.658565] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.266 [2024-04-18 12:05:30.667486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.668096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.668527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.668584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.668626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.669132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.669316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.669329] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.669340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.673244] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.266 [2024-04-18 12:05:30.681237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.681842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.682254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.682305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.682348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.682874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.683073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.683086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.683097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.685952] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.266 [2024-04-18 12:05:30.694260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.694832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.695174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.266 [2024-04-18 12:05:30.695225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.266 [2024-04-18 12:05:30.695267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.266 [2024-04-18 12:05:30.695937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.266 [2024-04-18 12:05:30.696252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.266 [2024-04-18 12:05:30.696266] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.266 [2024-04-18 12:05:30.696277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.266 [2024-04-18 12:05:30.699174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.266 [2024-04-18 12:05:30.707275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.266 [2024-04-18 12:05:30.707713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.708140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.708192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.708228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.708426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.708643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.708657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.708668] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.711580] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.267 [2024-04-18 12:05:30.720303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.720890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.721153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.721205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.721247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.721522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.721725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.721738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.721750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.724611] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.267 [2024-04-18 12:05:30.733452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.733863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.734243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.734296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.734337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.734944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.735133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.735149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.735160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.737945] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.267 [2024-04-18 12:05:30.746413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.746954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.747335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.747388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.747434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.747646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.747837] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.747850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.747861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.750728] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.267 [2024-04-18 12:05:30.759368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.759970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.760407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.760472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.760516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.761169] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.761647] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.761666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.761681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.765761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.267 [2024-04-18 12:05:30.773063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.773547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.773806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.773858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.773899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.774376] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.774584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.774603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.774614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.777472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.267 [2024-04-18 12:05:30.786000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.786587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.786955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.787007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.787048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.787272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.787462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.787476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.787487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.790278] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.267 [2024-04-18 12:05:30.798968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.267 [2024-04-18 12:05:30.799516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.799781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.267 [2024-04-18 12:05:30.799796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.267 [2024-04-18 12:05:30.799807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.267 [2024-04-18 12:05:30.799983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.267 [2024-04-18 12:05:30.800179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.267 [2024-04-18 12:05:30.800192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.267 [2024-04-18 12:05:30.800203] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.267 [2024-04-18 12:05:30.803116] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.527 [2024-04-18 12:05:30.812172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.527 [2024-04-18 12:05:30.812696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.527 [2024-04-18 12:05:30.812993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.527 [2024-04-18 12:05:30.813009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.527 [2024-04-18 12:05:30.813021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.527 [2024-04-18 12:05:30.813211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.527 [2024-04-18 12:05:30.813406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.527 [2024-04-18 12:05:30.813419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.527 [2024-04-18 12:05:30.813433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.527 [2024-04-18 12:05:30.816343] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.527 [2024-04-18 12:05:30.825310] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.527 [2024-04-18 12:05:30.825798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.527 [2024-04-18 12:05:30.826165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.527 [2024-04-18 12:05:30.826216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.527 [2024-04-18 12:05:30.826258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.527 [2024-04-18 12:05:30.826662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.826845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.826858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.826869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.829832] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.528 [2024-04-18 12:05:30.838481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.838832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.839172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.839188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.839200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.839389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.839582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.839596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.839607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.842531] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.528 [2024-04-18 12:05:30.851738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.852244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.852627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.852682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.852714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.852904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.853092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.853105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.853116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.856027] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.528 [2024-04-18 12:05:30.864779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.865374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.865602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.865620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.865632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.865818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.866000] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.866013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.866024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.868918] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.528 [2024-04-18 12:05:30.877822] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.878402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.878848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.878901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.878943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.879613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.879955] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.879969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.879980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.882827] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.528 [2024-04-18 12:05:30.890803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.891394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.891678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.891694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.891706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.891891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.892074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.892086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.892097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.894958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.528 [2024-04-18 12:05:30.903871] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.904439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.904811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.904864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.904906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.905438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.905645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.905659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.905670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.908535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.528 [2024-04-18 12:05:30.916831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.917369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.917745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.917799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.917841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.918505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.918941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.918955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.918966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.921826] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.528 [2024-04-18 12:05:30.930045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.930588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.930863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.930879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.930891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.931075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.931258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.931271] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.931282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.934172] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.528 [2024-04-18 12:05:30.943223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.528 [2024-04-18 12:05:30.943808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.944239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.528 [2024-04-18 12:05:30.944290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.528 [2024-04-18 12:05:30.944331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.528 [2024-04-18 12:05:30.944620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.528 [2024-04-18 12:05:30.944803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.528 [2024-04-18 12:05:30.944816] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.528 [2024-04-18 12:05:30.944834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.528 [2024-04-18 12:05:30.947619] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.529 [2024-04-18 12:05:30.956357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:30.956952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.957323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.957374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:30.957415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:30.957927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:30.958111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:30.958124] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:30.958135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:30.960907] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.529 [2024-04-18 12:05:30.969402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:30.969982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.970316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.970368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:30.970410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:30.971077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:30.971283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:30.971298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:30.971308] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:30.974068] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.529 [2024-04-18 12:05:30.982416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:30.982976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.983210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.983225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:30.983237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:30.983412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:30.983613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:30.983627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:30.983638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:30.986397] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.529 [2024-04-18 12:05:30.995342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:30.995917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.996328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:30.996381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:30.996422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:30.996999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:30.997184] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:30.997197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:30.997208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:30.999967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.529 [2024-04-18 12:05:31.008443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:31.008982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.009380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.009432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:31.009490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:31.009797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:31.009986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:31.010000] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:31.010011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:31.012930] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.529 [2024-04-18 12:05:31.021712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:31.022287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.022528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.022545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:31.022558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:31.022754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:31.022937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:31.022950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:31.022961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:31.025793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.529 [2024-04-18 12:05:31.034779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:31.035359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.035810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.035864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:31.035906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:31.036151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:31.036334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:31.036347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:31.036358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:31.039114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.529 [2024-04-18 12:05:31.047758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:31.048276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.048664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.048719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:31.048760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:31.049019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:31.049204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:31.049217] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:31.049228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:31.051993] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.529 [2024-04-18 12:05:31.060798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:31.061353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.061637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.061657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.529 [2024-04-18 12:05:31.061669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.529 [2024-04-18 12:05:31.061854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.529 [2024-04-18 12:05:31.062037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.529 [2024-04-18 12:05:31.062050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.529 [2024-04-18 12:05:31.062061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.529 [2024-04-18 12:05:31.064902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.529 [2024-04-18 12:05:31.073965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.529 [2024-04-18 12:05:31.074396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.074677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.529 [2024-04-18 12:05:31.074694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.530 [2024-04-18 12:05:31.074706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.530 [2024-04-18 12:05:31.074896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.789 [2024-04-18 12:05:31.075083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.789 [2024-04-18 12:05:31.075097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.789 [2024-04-18 12:05:31.075108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.789 [2024-04-18 12:05:31.077989] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.789 [2024-04-18 12:05:31.087270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.789 [2024-04-18 12:05:31.087612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.789 [2024-04-18 12:05:31.087969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.789 [2024-04-18 12:05:31.087986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.789 [2024-04-18 12:05:31.087998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.789 [2024-04-18 12:05:31.088187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.789 [2024-04-18 12:05:31.088375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.789 [2024-04-18 12:05:31.088388] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.789 [2024-04-18 12:05:31.088400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.789 [2024-04-18 12:05:31.091312] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.789 [2024-04-18 12:05:31.100438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.789 [2024-04-18 12:05:31.101016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.789 [2024-04-18 12:05:31.101243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.789 [2024-04-18 12:05:31.101264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.789 [2024-04-18 12:05:31.101279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.789 [2024-04-18 12:05:31.101475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.789 [2024-04-18 12:05:31.101664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.789 [2024-04-18 12:05:31.101678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.789 [2024-04-18 12:05:31.101689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.789 [2024-04-18 12:05:31.104602] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.789 [2024-04-18 12:05:31.113712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.789 [2024-04-18 12:05:31.114286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.789 [2024-04-18 12:05:31.114617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.114634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.114647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.114837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.115050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.115064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.115075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.117992] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.790 [2024-04-18 12:05:31.126939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.127513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.127846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.127862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.127874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.128064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.128252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.128265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.128276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.131191] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.790 [2024-04-18 12:05:31.140137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.140545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.140885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.140901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.140913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.141107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.141295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.141308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.141319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.144239] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.790 [2024-04-18 12:05:31.153355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.153923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.154351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.154404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.154445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.154964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.155226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.155244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.155260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.159338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.790 [2024-04-18 12:05:31.166970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.167530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.168773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.168802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.168832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.169035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.169225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.169240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.169252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.172167] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.790 [2024-04-18 12:05:31.180255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.180811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.181145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.181161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.181174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.181367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.181576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.181591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.181602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.184517] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.790 [2024-04-18 12:05:31.193471] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.193909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.194200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.194216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.194229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.194420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.194613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.194627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.194638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.197550] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.790 [2024-04-18 12:05:31.206664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.207242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.207596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.207614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.207627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.207817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.208004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.208018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.208029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.210942] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.790 [2024-04-18 12:05:31.219949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.220512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.220870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.220886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.220899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.790 [2024-04-18 12:05:31.221089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.790 [2024-04-18 12:05:31.221280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.790 [2024-04-18 12:05:31.221293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.790 [2024-04-18 12:05:31.221304] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.790 [2024-04-18 12:05:31.224219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.790 [2024-04-18 12:05:31.233161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.790 [2024-04-18 12:05:31.233712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.234068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.790 [2024-04-18 12:05:31.234084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.790 [2024-04-18 12:05:31.234096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.234286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.234480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.234494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.234505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.237417] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.791 [2024-04-18 12:05:31.246364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.246948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.247301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.247317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.247329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.247525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.247713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.247727] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.247738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.250650] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.791 [2024-04-18 12:05:31.259593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.260174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.260526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.260543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.260555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.260745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.260936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.260950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.260961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.263875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.791 [2024-04-18 12:05:31.272825] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.273381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.273657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.273674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.273687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.273877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.274066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.274079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.274091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.277003] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.791 [2024-04-18 12:05:31.286121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.286696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.287030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.287046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.287066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.287256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.287444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.287462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.287473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.290384] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.791 [2024-04-18 12:05:31.299330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.299923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.300199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.300216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.300228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.300417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.300612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.300629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.300641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.303559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.791 [2024-04-18 12:05:31.312517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.313076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.313433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.313454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.313468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.313662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.313849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.313863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.313874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.316790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.791 [2024-04-18 12:05:31.325743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.791 [2024-04-18 12:05:31.326249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.326527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.791 [2024-04-18 12:05:31.326545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:40.791 [2024-04-18 12:05:31.326558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:40.791 [2024-04-18 12:05:31.326749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:40.791 [2024-04-18 12:05:31.326937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.791 [2024-04-18 12:05:31.326950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.791 [2024-04-18 12:05:31.326961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.791 [2024-04-18 12:05:31.329874] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.052 [2024-04-18 12:05:31.339004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.339578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.339895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.339912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.339924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.340114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.340303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.340316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.052 [2024-04-18 12:05:31.340331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.052 [2024-04-18 12:05:31.343250] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.052 [2024-04-18 12:05:31.352204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.352777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.352955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.352971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.352984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.353174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.353363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.353376] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.052 [2024-04-18 12:05:31.353387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.052 [2024-04-18 12:05:31.356303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.052 [2024-04-18 12:05:31.365421] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.365995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.366287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.366303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.366316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.366512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.366701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.366714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.052 [2024-04-18 12:05:31.366725] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.052 [2024-04-18 12:05:31.369637] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.052 [2024-04-18 12:05:31.378580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.379155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.379485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.379502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.379515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.379705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.379894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.379907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.052 [2024-04-18 12:05:31.379918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.052 [2024-04-18 12:05:31.382836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.052 [2024-04-18 12:05:31.391781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.392372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.392654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.392671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.392684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.392873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.393060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.393074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.052 [2024-04-18 12:05:31.393085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.052 [2024-04-18 12:05:31.395997] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.052 [2024-04-18 12:05:31.404952] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.405548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.405834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.405850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.405863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.406052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.406241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.406254] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.052 [2024-04-18 12:05:31.406265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.052 [2024-04-18 12:05:31.409225] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.052 [2024-04-18 12:05:31.418177] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.052 [2024-04-18 12:05:31.418758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.419032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.052 [2024-04-18 12:05:31.419048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.052 [2024-04-18 12:05:31.419060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.052 [2024-04-18 12:05:31.419250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.052 [2024-04-18 12:05:31.419439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.052 [2024-04-18 12:05:31.419459] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.419471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.422387] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.053 [2024-04-18 12:05:31.431337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.431745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.432078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.432095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.432107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.432297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.432491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.432505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.432516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.435429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.053 [2024-04-18 12:05:31.444556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.445136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.445492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.445509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.445522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.445711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.445899] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.445913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.445924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.448842] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.053 [2024-04-18 12:05:31.457799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.458289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.458651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.458668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.458680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.458871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.459060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.459073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.459084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.461998] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.053 [2024-04-18 12:05:31.470950] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.471528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.471858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.471874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.471887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.472077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.472266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.472284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.472295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.475214] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.053 [2024-04-18 12:05:31.484164] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.484733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.485078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.485094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.485107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.485296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.485489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.485503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.485515] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.488422] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.053 [2024-04-18 12:05:31.497363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.497941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.498293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.498309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.498321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.498518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.498707] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.498721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.498732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.501645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.053 [2024-04-18 12:05:31.510601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.511159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.511493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.511553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.511596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.512141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.512330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.512344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.512355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.515272] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.053 [2024-04-18 12:05:31.523817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.524404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.524800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.524853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.524894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.525093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.525276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.525289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.525300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.528164] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.053 [2024-04-18 12:05:31.536864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.053 [2024-04-18 12:05:31.537443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.537840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.053 [2024-04-18 12:05:31.537891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.053 [2024-04-18 12:05:31.537940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.053 [2024-04-18 12:05:31.538124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.053 [2024-04-18 12:05:31.538308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.053 [2024-04-18 12:05:31.538321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.053 [2024-04-18 12:05:31.538332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.053 [2024-04-18 12:05:31.541151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.054 [2024-04-18 12:05:31.549910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.054 [2024-04-18 12:05:31.550481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.550920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.550974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.054 [2024-04-18 12:05:31.551013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.054 [2024-04-18 12:05:31.551188] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.054 [2024-04-18 12:05:31.551361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.054 [2024-04-18 12:05:31.551373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.054 [2024-04-18 12:05:31.551383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.054 [2024-04-18 12:05:31.554157] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.054 [2024-04-18 12:05:31.562991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.054 [2024-04-18 12:05:31.563563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.564019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.564071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.054 [2024-04-18 12:05:31.564113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.054 [2024-04-18 12:05:31.564778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.054 [2024-04-18 12:05:31.565338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.054 [2024-04-18 12:05:31.565351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.054 [2024-04-18 12:05:31.565362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.054 [2024-04-18 12:05:31.568171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.054 [2024-04-18 12:05:31.575948] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.054 [2024-04-18 12:05:31.576454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.576844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.576897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.054 [2024-04-18 12:05:31.576939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.054 [2024-04-18 12:05:31.577360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.054 [2024-04-18 12:05:31.577549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.054 [2024-04-18 12:05:31.577563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.054 [2024-04-18 12:05:31.577574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.054 [2024-04-18 12:05:31.580403] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.054 [2024-04-18 12:05:31.589230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.054 [2024-04-18 12:05:31.589788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.590158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.054 [2024-04-18 12:05:31.590219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.054 [2024-04-18 12:05:31.590261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.054 [2024-04-18 12:05:31.590572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.054 [2024-04-18 12:05:31.590836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.054 [2024-04-18 12:05:31.590854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.054 [2024-04-18 12:05:31.590870] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.054 [2024-04-18 12:05:31.594945] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.314 [2024-04-18 12:05:31.602629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.314 [2024-04-18 12:05:31.603174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.603530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.603546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.314 [2024-04-18 12:05:31.603558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.314 [2024-04-18 12:05:31.603769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.314 [2024-04-18 12:05:31.603953] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.314 [2024-04-18 12:05:31.603966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.314 [2024-04-18 12:05:31.603977] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.314 [2024-04-18 12:05:31.606887] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.314 [2024-04-18 12:05:31.615602] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.314 [2024-04-18 12:05:31.616168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.616623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.616678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.314 [2024-04-18 12:05:31.616720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.314 [2024-04-18 12:05:31.617328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.314 [2024-04-18 12:05:31.617518] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.314 [2024-04-18 12:05:31.617532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.314 [2024-04-18 12:05:31.617543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.314 [2024-04-18 12:05:31.620304] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.314 [2024-04-18 12:05:31.628609] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.314 [2024-04-18 12:05:31.629132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.629562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.629616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.314 [2024-04-18 12:05:31.629666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.314 [2024-04-18 12:05:31.630231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.314 [2024-04-18 12:05:31.630414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.314 [2024-04-18 12:05:31.630427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.314 [2024-04-18 12:05:31.630461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.314 [2024-04-18 12:05:31.634544] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.314 [2024-04-18 12:05:31.642255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.314 [2024-04-18 12:05:31.642791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.643201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.643253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.314 [2024-04-18 12:05:31.643295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.314 [2024-04-18 12:05:31.643510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.314 [2024-04-18 12:05:31.643694] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.314 [2024-04-18 12:05:31.643708] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.314 [2024-04-18 12:05:31.643719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.314 [2024-04-18 12:05:31.646518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.314 [2024-04-18 12:05:31.655247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.314 [2024-04-18 12:05:31.655819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.656173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.314 [2024-04-18 12:05:31.656225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.314 [2024-04-18 12:05:31.656267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.314 [2024-04-18 12:05:31.656484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.656668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.656681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.656692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.659453] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.315 [2024-04-18 12:05:31.668223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.668781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.669084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.669099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.669114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.669299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.669490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.669504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.669515] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.672276] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.315 [2024-04-18 12:05:31.681248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.681839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.682196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.682212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.682224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.682409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.682596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.682610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.682621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.685439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.315 [2024-04-18 12:05:31.694187] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.694754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.695164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.695218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.695259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.695487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.695672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.695685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.695696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.698494] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.315 [2024-04-18 12:05:31.707152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.707739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.708104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.708157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.708199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.708505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.708690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.708703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.708714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.711542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.315 [2024-04-18 12:05:31.720190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.720780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.721058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.721110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.721152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.721509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.721693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.721706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.721717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.725671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.315 [2024-04-18 12:05:31.733631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.734224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.734622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.734638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.734651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.734835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.735018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.735031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.735042] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.737802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.315 [2024-04-18 12:05:31.746642] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.747208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.747632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.747688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.747737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.747921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.748108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.748121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.748131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.750926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.315 [2024-04-18 12:05:31.759709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.760286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.760788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.760847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.760888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.761489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.761673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.761685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.761696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.764455] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.315 [2024-04-18 12:05:31.772621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.315 [2024-04-18 12:05:31.773203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.773535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.315 [2024-04-18 12:05:31.773551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.315 [2024-04-18 12:05:31.773563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.315 [2024-04-18 12:05:31.773751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.315 [2024-04-18 12:05:31.773925] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.315 [2024-04-18 12:05:31.773937] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.315 [2024-04-18 12:05:31.773948] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.315 [2024-04-18 12:05:31.776630] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.316 [2024-04-18 12:05:31.785615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.316 [2024-04-18 12:05:31.786102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.786478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.786532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.316 [2024-04-18 12:05:31.786573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.316 [2024-04-18 12:05:31.787225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.316 [2024-04-18 12:05:31.787781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.316 [2024-04-18 12:05:31.787795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.316 [2024-04-18 12:05:31.787812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.316 [2024-04-18 12:05:31.790599] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.316 [2024-04-18 12:05:31.798643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.316 [2024-04-18 12:05:31.799213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.799642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.799697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.316 [2024-04-18 12:05:31.799709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.316 [2024-04-18 12:05:31.799884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.316 [2024-04-18 12:05:31.800058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.316 [2024-04-18 12:05:31.800070] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.316 [2024-04-18 12:05:31.800080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.316 [2024-04-18 12:05:31.802850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.316 [2024-04-18 12:05:31.811576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.316 [2024-04-18 12:05:31.812124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.812576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.812598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.316 [2024-04-18 12:05:31.812616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.316 [2024-04-18 12:05:31.812879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.316 [2024-04-18 12:05:31.813141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.316 [2024-04-18 12:05:31.813158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.316 [2024-04-18 12:05:31.813174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.316 [2024-04-18 12:05:31.817246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.316 [2024-04-18 12:05:31.825240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.316 [2024-04-18 12:05:31.825762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.826207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.826259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.316 [2024-04-18 12:05:31.826300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.316 [2024-04-18 12:05:31.826948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.316 [2024-04-18 12:05:31.827133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.316 [2024-04-18 12:05:31.827149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.316 [2024-04-18 12:05:31.827160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.316 [2024-04-18 12:05:31.829960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.316 [2024-04-18 12:05:31.838207] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.316 [2024-04-18 12:05:31.838786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.839103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.839155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.316 [2024-04-18 12:05:31.839197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.316 [2024-04-18 12:05:31.839729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.316 [2024-04-18 12:05:31.839917] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.316 [2024-04-18 12:05:31.839931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.316 [2024-04-18 12:05:31.839942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.316 [2024-04-18 12:05:31.842838] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.316 [2024-04-18 12:05:31.851436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.316 [2024-04-18 12:05:31.852056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.852473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.316 [2024-04-18 12:05:31.852527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.316 [2024-04-18 12:05:31.852568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.316 [2024-04-18 12:05:31.852809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.316 [2024-04-18 12:05:31.852993] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.316 [2024-04-18 12:05:31.853005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.316 [2024-04-18 12:05:31.853016] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.316 [2024-04-18 12:05:31.856958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.576 [2024-04-18 12:05:31.865037] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.576 [2024-04-18 12:05:31.865597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.865997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.866013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.576 [2024-04-18 12:05:31.866025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.576 [2024-04-18 12:05:31.866214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.576 [2024-04-18 12:05:31.866402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.576 [2024-04-18 12:05:31.866415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.576 [2024-04-18 12:05:31.866430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.576 [2024-04-18 12:05:31.869320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.576 [2024-04-18 12:05:31.878003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.576 [2024-04-18 12:05:31.878541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.878885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.878937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.576 [2024-04-18 12:05:31.878978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.576 [2024-04-18 12:05:31.879646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.576 [2024-04-18 12:05:31.880067] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.576 [2024-04-18 12:05:31.880080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.576 [2024-04-18 12:05:31.880091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.576 [2024-04-18 12:05:31.882864] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.576 [2024-04-18 12:05:31.890902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.576 [2024-04-18 12:05:31.891443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.891889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.891941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.576 [2024-04-18 12:05:31.891984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.576 [2024-04-18 12:05:31.892654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.576 [2024-04-18 12:05:31.892945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.576 [2024-04-18 12:05:31.892958] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.576 [2024-04-18 12:05:31.892970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.576 [2024-04-18 12:05:31.895883] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.576 [2024-04-18 12:05:31.904020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.576 [2024-04-18 12:05:31.904562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.904921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.904936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.576 [2024-04-18 12:05:31.904947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.576 [2024-04-18 12:05:31.905123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.576 [2024-04-18 12:05:31.905297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.576 [2024-04-18 12:05:31.905309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.576 [2024-04-18 12:05:31.905322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.576 [2024-04-18 12:05:31.908090] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.576 [2024-04-18 12:05:31.916980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.576 [2024-04-18 12:05:31.917534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.917942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.917993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.576 [2024-04-18 12:05:31.918034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.576 [2024-04-18 12:05:31.918243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.576 [2024-04-18 12:05:31.918416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.576 [2024-04-18 12:05:31.918429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.576 [2024-04-18 12:05:31.918439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.576 [2024-04-18 12:05:31.921215] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.576 [2024-04-18 12:05:31.929921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.576 [2024-04-18 12:05:31.930494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.930878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-04-18 12:05:31.930930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.576 [2024-04-18 12:05:31.930971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.576 [2024-04-18 12:05:31.931641] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.576 [2024-04-18 12:05:31.932192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:31.932205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:31.932216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:31.934977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.577 [2024-04-18 12:05:31.942863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:31.943436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.943927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.943979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:31.944020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:31.944457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:31.944641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:31.944653] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:31.944664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:31.948652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.577 [2024-04-18 12:05:31.956526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:31.957116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.957560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.957615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:31.957658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:31.958280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:31.958474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:31.958488] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:31.958498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:31.961264] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.577 [2024-04-18 12:05:31.969434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:31.970009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.970371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.970423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:31.970482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:31.971145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:31.971329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:31.971342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:31.971353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:31.974111] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.577 [2024-04-18 12:05:31.982447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:31.983024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.983467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.983522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:31.983563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:31.984060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:31.984243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:31.984255] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:31.984267] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:31.987025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.577 [2024-04-18 12:05:31.995494] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:31.996043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.996482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:31.996536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:31.996579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:31.997231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:31.997628] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:31.997641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:31.997652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:32.000410] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.577 [2024-04-18 12:05:32.008532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:32.009091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:32.009522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:32.009577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:32.009619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:32.010099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:32.010282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:32.010295] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:32.010306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:32.013066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.577 [2024-04-18 12:05:32.021506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:32.022082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:32.022388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:32.022403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:32.022415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:32.022607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:32.022790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:32.022803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:32.022814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:32.025576] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.577 [2024-04-18 12:05:32.034502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.577 [2024-04-18 12:05:32.035077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:32.035525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-04-18 12:05:32.035580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.577 [2024-04-18 12:05:32.035622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.577 [2024-04-18 12:05:32.036275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.577 [2024-04-18 12:05:32.036618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.577 [2024-04-18 12:05:32.036637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.577 [2024-04-18 12:05:32.036653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.577 [2024-04-18 12:05:32.040732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.577 [2024-04-18 12:05:32.047727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.578 [2024-04-18 12:05:32.048306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.048782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.048837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.578 [2024-04-18 12:05:32.048879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.578 [2024-04-18 12:05:32.049433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.578 [2024-04-18 12:05:32.049621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.578 [2024-04-18 12:05:32.049635] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.578 [2024-04-18 12:05:32.049646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.578 [2024-04-18 12:05:32.052444] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.578 [2024-04-18 12:05:32.060670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.578 [2024-04-18 12:05:32.061163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.061528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.061583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.578 [2024-04-18 12:05:32.061624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.578 [2024-04-18 12:05:32.062209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.578 [2024-04-18 12:05:32.062392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.578 [2024-04-18 12:05:32.062405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.578 [2024-04-18 12:05:32.062416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.578 [2024-04-18 12:05:32.065176] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.578 [2024-04-18 12:05:32.073599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.578 [2024-04-18 12:05:32.074144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.074599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.074654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.578 [2024-04-18 12:05:32.074695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.578 [2024-04-18 12:05:32.075349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.578 [2024-04-18 12:05:32.075658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.578 [2024-04-18 12:05:32.075671] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.578 [2024-04-18 12:05:32.075682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.578 [2024-04-18 12:05:32.079375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.578 [2024-04-18 12:05:32.087209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.578 [2024-04-18 12:05:32.087768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.088152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.088167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.578 [2024-04-18 12:05:32.088179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.578 [2024-04-18 12:05:32.088355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.578 [2024-04-18 12:05:32.088552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.578 [2024-04-18 12:05:32.088566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.578 [2024-04-18 12:05:32.088577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.578 [2024-04-18 12:05:32.091468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.578 [2024-04-18 12:05:32.100362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.578 [2024-04-18 12:05:32.100901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.101241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.101294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.578 [2024-04-18 12:05:32.101335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.578 [2024-04-18 12:05:32.101786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.578 [2024-04-18 12:05:32.101970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.578 [2024-04-18 12:05:32.101983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.578 [2024-04-18 12:05:32.101994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.578 [2024-04-18 12:05:32.104785] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.578 [2024-04-18 12:05:32.113332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.578 [2024-04-18 12:05:32.113898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.114215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.578 [2024-04-18 12:05:32.114274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.578 [2024-04-18 12:05:32.114316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.578 [2024-04-18 12:05:32.114747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.578 [2024-04-18 12:05:32.114931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.578 [2024-04-18 12:05:32.114944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.578 [2024-04-18 12:05:32.114955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.578 [2024-04-18 12:05:32.117715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.839 [2024-04-18 12:05:32.126442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.839 [2024-04-18 12:05:32.127043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.839 [2024-04-18 12:05:32.127397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.839 [2024-04-18 12:05:32.127413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.839 [2024-04-18 12:05:32.127432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.839 [2024-04-18 12:05:32.127643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.839 [2024-04-18 12:05:32.127832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.839 [2024-04-18 12:05:32.127845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.839 [2024-04-18 12:05:32.127857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.839 [2024-04-18 12:05:32.130784] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.840 [2024-04-18 12:05:32.139380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.139973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.140329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.140345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.140358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.140553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.140738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.140751] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.140762] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.143527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.840 [2024-04-18 12:05:32.152375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.152953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.153431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.153498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.153533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.153718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.153901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.153914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.153925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.156684] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.840 [2024-04-18 12:05:32.165442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.166026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.166383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.166434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.166494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.166966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.167150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.167163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.167174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.169930] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.840 [2024-04-18 12:05:32.178427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.179003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.179490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.179545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.179587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.180240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.180808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.180822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.180833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.183563] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.840 [2024-04-18 12:05:32.191514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.192093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.192527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.192582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.192641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.192818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.192992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.193004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.193015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.195797] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.840 [2024-04-18 12:05:32.204591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.205186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.205555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.205609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.205651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.206302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.206922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.206936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.206947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.209724] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.840 [2024-04-18 12:05:32.217641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.218214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.218563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.218584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.218596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.218781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.218964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.218977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.218988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.221773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.840 [2024-04-18 12:05:32.230749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.231340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.231692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.231747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.231791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.231978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.232162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.232175] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.232186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.234947] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.840 [2024-04-18 12:05:32.243772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.244357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.244796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.244852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.840 [2024-04-18 12:05:32.244864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.840 [2024-04-18 12:05:32.245050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.840 [2024-04-18 12:05:32.245233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.840 [2024-04-18 12:05:32.245246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.840 [2024-04-18 12:05:32.245257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.840 [2024-04-18 12:05:32.248018] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.840 [2024-04-18 12:05:32.256771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.840 [2024-04-18 12:05:32.257280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-04-18 12:05:32.257759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.257811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.257823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.258009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.258191] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.258204] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.258215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.260976] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.841 [2024-04-18 12:05:32.269777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.270322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.270759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.270810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.270822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.271010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.271194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.271207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.271219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.273980] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.841 [2024-04-18 12:05:32.282726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.283269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.283703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.283746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.283758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.283942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.284125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.284137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.284148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.286863] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.841 [2024-04-18 12:05:32.295640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.296172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.296474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.296506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.296518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.296703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.296886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.296899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.296910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.299761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.841 [2024-04-18 12:05:32.308684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.309265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.309716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.309771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.309815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.310000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.310185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.310199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.310209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.312967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.841 [2024-04-18 12:05:32.321590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.322146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.322564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.322618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.322659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.323295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.323483] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.323497] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.323508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.326200] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.841 [2024-04-18 12:05:32.334523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.335075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.335476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.335531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.335572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.336153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.336337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.336350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.336361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.339121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.841 [2024-04-18 12:05:32.347711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.348306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.348733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.348788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.348830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.349364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.349631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.349654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.349669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.353749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.841 [2024-04-18 12:05:32.361276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.361825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.362122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.362138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.841 [2024-04-18 12:05:32.362150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.841 [2024-04-18 12:05:32.362333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.841 [2024-04-18 12:05:32.362524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.841 [2024-04-18 12:05:32.362537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.841 [2024-04-18 12:05:32.362548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.841 [2024-04-18 12:05:32.365304] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.841 [2024-04-18 12:05:32.374187] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.841 [2024-04-18 12:05:32.374768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.375245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-04-18 12:05:32.375297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:41.842 [2024-04-18 12:05:32.375309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:41.842 [2024-04-18 12:05:32.375500] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:41.842 [2024-04-18 12:05:32.375684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.842 [2024-04-18 12:05:32.375697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.842 [2024-04-18 12:05:32.375708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.842 [2024-04-18 12:05:32.378465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.387420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.387985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.388436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.388504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.388546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.389097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.389285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.389302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.389313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.392192] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.102 [2024-04-18 12:05:32.400395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.400974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.401408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.401473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.401516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.402169] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.402520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.402534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.402545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.405360] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.413448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.414012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.414397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.414463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.414506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.415037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.415221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.415234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.415245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.418002] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.102 [2024-04-18 12:05:32.426348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.426926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.427421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.427490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.427533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.428185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.428549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.428562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.428576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.431514] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.439415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.439979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.440365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.440418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.440475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.441132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.441316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.441328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.441339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.444105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.102 [2024-04-18 12:05:32.452445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.453004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.453465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.453519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.453560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.454211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.454561] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.454574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.454595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.457275] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.465376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.465865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.466238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.466290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.466331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.466997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.467446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.467464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.467476] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.470231] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.102 [2024-04-18 12:05:32.478304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.478814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.479247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.479300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.479340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.479575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.479758] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.479771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.479782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.482544] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.491322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.491885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.492285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.492338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.492379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.493045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.493519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.493532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.493543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.496297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.102 [2024-04-18 12:05:32.504360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.504955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.505345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.505399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.505439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.506109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.506586] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.506600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.506611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.509430] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.517421] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.518039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.518349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.518402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.518443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.518848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.519030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.519043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.519054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.521827] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.102 [2024-04-18 12:05:32.530489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.530998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.531386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.531438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.531495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.532147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.532633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.532647] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.102 [2024-04-18 12:05:32.532658] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.102 [2024-04-18 12:05:32.535412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.102 [2024-04-18 12:05:32.543582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.102 [2024-04-18 12:05:32.544167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.544516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.102 [2024-04-18 12:05:32.544571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.102 [2024-04-18 12:05:32.544613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.102 [2024-04-18 12:05:32.545265] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.102 [2024-04-18 12:05:32.545824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.102 [2024-04-18 12:05:32.545838] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.545849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.548685] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.103 [2024-04-18 12:05:32.556654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.557176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.557604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.557668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.557680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.557857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.558031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.558044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.558054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.560774] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.103 [2024-04-18 12:05:32.569755] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.570264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.570580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.570637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.570685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.570862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.571038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.571050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.571061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.573884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.103 [2024-04-18 12:05:32.582670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.583229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.584909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.584939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.584954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.585147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.585332] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.585345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.585356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.588206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.103 [2024-04-18 12:05:32.595771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.596228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.596535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.596552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.596565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.596755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.596957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.596970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.596981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.599878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.103 [2024-04-18 12:05:32.608802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.609383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.609766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.609821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.609850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.610040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.610228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.610241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.610252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.613179] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.103 [2024-04-18 12:05:32.621942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.622516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.622748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.622765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.622777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.622962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.623146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.623159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.623177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.626000] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.103 [2024-04-18 12:05:32.635055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.635541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.635928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.635981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.636023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.636691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.103 [2024-04-18 12:05:32.636966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.103 [2024-04-18 12:05:32.636979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.103 [2024-04-18 12:05:32.636990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.103 [2024-04-18 12:05:32.639751] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.103 [2024-04-18 12:05:32.648322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.103 [2024-04-18 12:05:32.648765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.649118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.103 [2024-04-18 12:05:32.649134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.103 [2024-04-18 12:05:32.649146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.103 [2024-04-18 12:05:32.649336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.363 [2024-04-18 12:05:32.649529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.363 [2024-04-18 12:05:32.649543] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.363 [2024-04-18 12:05:32.649555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.363 [2024-04-18 12:05:32.652386] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.363 [2024-04-18 12:05:32.661435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.363 [2024-04-18 12:05:32.662003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.662356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.662408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.363 [2024-04-18 12:05:32.662467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.363 [2024-04-18 12:05:32.662856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.363 [2024-04-18 12:05:32.663119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.363 [2024-04-18 12:05:32.663137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.363 [2024-04-18 12:05:32.663153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.363 [2024-04-18 12:05:32.667394] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.363 [2024-04-18 12:05:32.674992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.363 [2024-04-18 12:05:32.675574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.675934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.675999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.363 [2024-04-18 12:05:32.676011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.363 [2024-04-18 12:05:32.676196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.363 [2024-04-18 12:05:32.676379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.363 [2024-04-18 12:05:32.676392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.363 [2024-04-18 12:05:32.676403] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.363 [2024-04-18 12:05:32.679214] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.363 [2024-04-18 12:05:32.688084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.363 [2024-04-18 12:05:32.688468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.688835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.688889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.363 [2024-04-18 12:05:32.688931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.363 [2024-04-18 12:05:32.689599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.363 [2024-04-18 12:05:32.690053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.363 [2024-04-18 12:05:32.690066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.363 [2024-04-18 12:05:32.690077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.363 [2024-04-18 12:05:32.692870] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.363 [2024-04-18 12:05:32.701240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.363 [2024-04-18 12:05:32.701821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.702205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.702257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.363 [2024-04-18 12:05:32.702298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.363 [2024-04-18 12:05:32.702811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.363 [2024-04-18 12:05:32.702994] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.363 [2024-04-18 12:05:32.703007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.363 [2024-04-18 12:05:32.703018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.363 [2024-04-18 12:05:32.705808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.363 [2024-04-18 12:05:32.714231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.363 [2024-04-18 12:05:32.714653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.714935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.363 [2024-04-18 12:05:32.714951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.363 [2024-04-18 12:05:32.714966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.363 [2024-04-18 12:05:32.715151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.363 [2024-04-18 12:05:32.715333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.363 [2024-04-18 12:05:32.715346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.715357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.718118] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.364 [2024-04-18 12:05:32.727250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.727836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.728196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.728247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.728297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.728488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.728672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.728685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.728696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.731525] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.364 [2024-04-18 12:05:32.740320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.740890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.741199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.741252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.741292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.741534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.741718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.741731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.741742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.744516] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.364 [2024-04-18 12:05:32.753369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.753971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.754352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.754404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.754445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.755119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.755365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.755378] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.755389] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.758146] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.364 [2024-04-18 12:05:32.766415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.766918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.767269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.767321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.767362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.767582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.767766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.767779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.767790] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.770568] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.364 [2024-04-18 12:05:32.779518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.779942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.780233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.780285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.780327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.780953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.781138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.781151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.781162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.784003] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.364 [2024-04-18 12:05:32.792436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.792896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.793237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.793289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.793331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.794011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.794355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.794369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.794380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.797139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.364 [2024-04-18 12:05:32.805557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.806063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.806477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.806521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.806533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.806718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.806901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.806914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.806925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.809766] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.364 [2024-04-18 12:05:32.818718] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.819147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.819396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.819447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.819508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.820121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.820304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.820317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.820328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.823188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.364 [2024-04-18 12:05:32.831780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.364 [2024-04-18 12:05:32.832279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.832688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.364 [2024-04-18 12:05:32.832744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.364 [2024-04-18 12:05:32.832786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.364 [2024-04-18 12:05:32.833438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.364 [2024-04-18 12:05:32.833646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.364 [2024-04-18 12:05:32.833659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.364 [2024-04-18 12:05:32.833670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.364 [2024-04-18 12:05:32.836500] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.364 [2024-04-18 12:05:32.844826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.365 [2024-04-18 12:05:32.845345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.845574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.845591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.365 [2024-04-18 12:05:32.845604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.365 [2024-04-18 12:05:32.845794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.365 [2024-04-18 12:05:32.845983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.365 [2024-04-18 12:05:32.845996] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.365 [2024-04-18 12:05:32.846008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.365 [2024-04-18 12:05:32.848954] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.365 [2024-04-18 12:05:32.858076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.365 [2024-04-18 12:05:32.858501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.858837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.858853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.365 [2024-04-18 12:05:32.858866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.365 [2024-04-18 12:05:32.859056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.365 [2024-04-18 12:05:32.859245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.365 [2024-04-18 12:05:32.859258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.365 [2024-04-18 12:05:32.859270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.365 [2024-04-18 12:05:32.862187] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.365 [2024-04-18 12:05:32.871315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.365 [2024-04-18 12:05:32.871897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.872250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.872267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.365 [2024-04-18 12:05:32.872279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.365 [2024-04-18 12:05:32.872476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.365 [2024-04-18 12:05:32.872668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.365 [2024-04-18 12:05:32.872682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.365 [2024-04-18 12:05:32.872693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.365 [2024-04-18 12:05:32.875604] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.365 [2024-04-18 12:05:32.884548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.365 [2024-04-18 12:05:32.885127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.885462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.885479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.365 [2024-04-18 12:05:32.885492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.365 [2024-04-18 12:05:32.885681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.365 [2024-04-18 12:05:32.885869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.365 [2024-04-18 12:05:32.885882] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.365 [2024-04-18 12:05:32.885893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.365 [2024-04-18 12:05:32.888808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.365 [2024-04-18 12:05:32.897776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.365 [2024-04-18 12:05:32.898354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.898646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.365 [2024-04-18 12:05:32.898664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.365 [2024-04-18 12:05:32.898676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.365 [2024-04-18 12:05:32.898868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.365 [2024-04-18 12:05:32.899057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.365 [2024-04-18 12:05:32.899070] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.365 [2024-04-18 12:05:32.899081] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.365 [2024-04-18 12:05:32.901996] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.626 [2024-04-18 12:05:32.910946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.911448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.911724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.911741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.911753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.911942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.912129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.912146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.912157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.915072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.626 [2024-04-18 12:05:32.924102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.924649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.925043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.925096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.925138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.925663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.925853] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.925866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.925877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.928793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.626 [2024-04-18 12:05:32.937398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.937939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.938373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.938413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.938426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.938621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.938809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.938822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.938834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.941755] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.626 [2024-04-18 12:05:32.950705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.951266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.951571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.951626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.951667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.952319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.952534] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.952549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.952563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.955464] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.626 [2024-04-18 12:05:32.963914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.964520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.964821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.964873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.964932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.965433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.965654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.965668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.965678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.968438] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.626 [2024-04-18 12:05:32.976958] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.977486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.977919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.977972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.978014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.978463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.978646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.978659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.978670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.981426] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.626 [2024-04-18 12:05:32.990048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:32.990618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.990975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.626 [2024-04-18 12:05:32.991027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.626 [2024-04-18 12:05:32.991068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.626 [2024-04-18 12:05:32.991731] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.626 [2024-04-18 12:05:32.991916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.626 [2024-04-18 12:05:32.991929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.626 [2024-04-18 12:05:32.991940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.626 [2024-04-18 12:05:32.994735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.626 [2024-04-18 12:05:33.003034] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.626 [2024-04-18 12:05:33.003384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.003670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.003687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.003699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.003883] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.004066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.004080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.004090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.006852] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.627 [2024-04-18 12:05:33.016025] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.016569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.016873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.016926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.016966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.017628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.017902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.017915] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.017926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.022001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.627 [2024-04-18 12:05:33.029743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.030244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.030605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.030659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.030700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.030972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.031155] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.031168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.031179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.033949] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.627 [2024-04-18 12:05:33.042781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.043356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.043722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.043777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.043807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.043990] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.044174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.044187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.044198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.046960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.627 [2024-04-18 12:05:33.055756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.056162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.056503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.056558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.056599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.057050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.057233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.057246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.057256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.060017] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.627 [2024-04-18 12:05:33.068791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.069303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.069656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.069710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.069751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.069935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.070117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.070131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.070142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.072945] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.627 [2024-04-18 12:05:33.081916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.082409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.082699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.082754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.082795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.083448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.083963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.083976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.083987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.086830] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.627 [2024-04-18 12:05:33.095019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.095514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.095958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.096011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.096053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.096265] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.096439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.096457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.096468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.099420] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.627 [2024-04-18 12:05:33.108213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.108701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.108931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.108947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.108959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.627 [2024-04-18 12:05:33.109149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.627 [2024-04-18 12:05:33.109337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.627 [2024-04-18 12:05:33.109350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.627 [2024-04-18 12:05:33.109362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.627 [2024-04-18 12:05:33.112280] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.627 [2024-04-18 12:05:33.121402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.627 [2024-04-18 12:05:33.121961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.122181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.627 [2024-04-18 12:05:33.122196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.627 [2024-04-18 12:05:33.122209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.628 [2024-04-18 12:05:33.122398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.628 [2024-04-18 12:05:33.122592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.628 [2024-04-18 12:05:33.122606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.628 [2024-04-18 12:05:33.122617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.628 [2024-04-18 12:05:33.125524] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.628 [2024-04-18 12:05:33.134633] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.628 [2024-04-18 12:05:33.135208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.628 [2024-04-18 12:05:33.135586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.628 [2024-04-18 12:05:33.135642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.628 [2024-04-18 12:05:33.135683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.628 [2024-04-18 12:05:33.135961] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.628 [2024-04-18 12:05:33.136149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.628 [2024-04-18 12:05:33.136162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.628 [2024-04-18 12:05:33.136173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.628 [2024-04-18 12:05:33.139084] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.628 [2024-04-18 12:05:33.147778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.628 [2024-04-18 12:05:33.148355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.628 [2024-04-18 12:05:33.148711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.628 [2024-04-18 12:05:33.148729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.628 [2024-04-18 12:05:33.148741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.628 [2024-04-18 12:05:33.148926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.628 [2024-04-18 12:05:33.149108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.628 [2024-04-18 12:05:33.149121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.628 [2024-04-18 12:05:33.149132] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.628 [2024-04-18 12:05:33.151892] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.628 [2024-04-18 12:05:33.160814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.628 [2024-04-18 12:05:33.161362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.628 [2024-04-18 12:05:33.161799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.628 [2024-04-18 12:05:33.161853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.628 [2024-04-18 12:05:33.161895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.628 [2024-04-18 12:05:33.162388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.628 [2024-04-18 12:05:33.162576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.628 [2024-04-18 12:05:33.162590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.628 [2024-04-18 12:05:33.162601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.628 [2024-04-18 12:05:33.165353] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.890 [2024-04-18 12:05:33.174093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.174561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.174749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.174765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.174777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.174976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.175160] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.175173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.175184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.177991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.890 [2024-04-18 12:05:33.187099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.187655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.188091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.188126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.188137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.188312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.188511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.188525] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.188536] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.191290] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.890 [2024-04-18 12:05:33.200231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.200799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.201086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.201105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.201117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.201302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.201491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.201505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.201516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.204270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.890 [2024-04-18 12:05:33.213307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.213915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.214325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.214377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.214418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.215084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.215525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.215539] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.215551] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.218340] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.890 [2024-04-18 12:05:33.226386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.226891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.227264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.227317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.227359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.228025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.228500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.228514] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.228525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.231277] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.890 [2024-04-18 12:05:33.239368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.239891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.240194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.240246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.240289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.240559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.240821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.240839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.240855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.244936] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.890 [2024-04-18 12:05:33.252715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.253310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.253744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.253800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.253842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.254428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.254617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.254631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.254642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.257395] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.890 [2024-04-18 12:05:33.265686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.266282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.266630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.266685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.266726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.267353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.267551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.267564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.267574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.270253] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.890 [2024-04-18 12:05:33.278656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.279251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.279606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.279623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.279638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.279826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.280009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.280023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.280033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.890 [2024-04-18 12:05:33.282757] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.890 [2024-04-18 12:05:33.291688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.890 [2024-04-18 12:05:33.292254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.292612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.890 [2024-04-18 12:05:33.292628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.890 [2024-04-18 12:05:33.292640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.890 [2024-04-18 12:05:33.292825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.890 [2024-04-18 12:05:33.293009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.890 [2024-04-18 12:05:33.293022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.890 [2024-04-18 12:05:33.293033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.295789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.891 [2024-04-18 12:05:33.304616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.305200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.305630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.305687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.305729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.306348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.306535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.306549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.306560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.309250] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.891 [2024-04-18 12:05:33.317577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.318162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.318568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.318624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.318665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.319326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.319672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.319684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.319695] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.322375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.891 [2024-04-18 12:05:33.330467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.331037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.331447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.331520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.331551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.331821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.332082] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.332100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.332115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.336188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.891 [2024-04-18 12:05:33.343776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.344353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.344588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.344644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.344685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.345336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.345548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.345562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.345573] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.348366] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.891 [2024-04-18 12:05:33.356990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.357521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.357878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.357929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.357971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.358442] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.358637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.358651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.358662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.361556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.891 [2024-04-18 12:05:33.370038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.370553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.370902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.370964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.371005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.371527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.371712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.371725] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.371736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.374492] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.891 [2024-04-18 12:05:33.382969] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.383516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.383829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.383882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.383922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.384598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.384963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.384976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.384987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.387745] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.891 [2024-04-18 12:05:33.395880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.396471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.396844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.396897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.891 [2024-04-18 12:05:33.396939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.891 [2024-04-18 12:05:33.397501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.891 [2024-04-18 12:05:33.397687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.891 [2024-04-18 12:05:33.397700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.891 [2024-04-18 12:05:33.397711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.891 [2024-04-18 12:05:33.400465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.891 [2024-04-18 12:05:33.408926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.891 [2024-04-18 12:05:33.409474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.891 [2024-04-18 12:05:33.409902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.892 [2024-04-18 12:05:33.409953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.892 [2024-04-18 12:05:33.409994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.892 [2024-04-18 12:05:33.410522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.892 [2024-04-18 12:05:33.410705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.892 [2024-04-18 12:05:33.410718] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.892 [2024-04-18 12:05:33.410729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.892 [2024-04-18 12:05:33.413485] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.892 [2024-04-18 12:05:33.421916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.892 [2024-04-18 12:05:33.422476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.892 [2024-04-18 12:05:33.422651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.892 [2024-04-18 12:05:33.422666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:42.892 [2024-04-18 12:05:33.422677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:42.892 [2024-04-18 12:05:33.422853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:42.892 [2024-04-18 12:05:33.423025] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.892 [2024-04-18 12:05:33.423038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.892 [2024-04-18 12:05:33.423048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.892 [2024-04-18 12:05:33.425817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.892 [2024-04-18 12:05:33.435189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.435766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.436124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.436141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.436153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.436343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.436539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.436556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.436567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.439477] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.161 [2024-04-18 12:05:33.448423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.448999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.449366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.449382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.449394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.449590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.449777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.449790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.449802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.452711] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.161 [2024-04-18 12:05:33.461500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.462114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.462544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.462600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.462642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.462840] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.463023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.463037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.463054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.467088] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.161 [2024-04-18 12:05:33.474933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.475494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.475859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.475912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.475954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.476624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.476995] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.477008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.477022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.479780] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.161 [2024-04-18 12:05:33.487916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.488481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.488842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.488894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.488935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.489606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.490068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.490081] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.490092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.492882] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.161 [2024-04-18 12:05:33.500910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.501390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.501721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.501738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.501751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.501936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.502119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.502132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.502143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.504901] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.161 [2024-04-18 12:05:33.513912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.514470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.514839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.514890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.161 [2024-04-18 12:05:33.514930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.161 [2024-04-18 12:05:33.515599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.161 [2024-04-18 12:05:33.516102] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.161 [2024-04-18 12:05:33.516115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.161 [2024-04-18 12:05:33.516130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.161 [2024-04-18 12:05:33.518888] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.161 [2024-04-18 12:05:33.526866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.161 [2024-04-18 12:05:33.527426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-04-18 12:05:33.527798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.527853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.527894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.528274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.528462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.528476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.528487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.531241] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.162 [2024-04-18 12:05:33.539803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.540382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.540666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.540682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.540694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.540879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.541061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.541074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.541085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.543850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2648871 Killed "${NVMF_APP[@]}" "$@" 00:29:43.162 12:05:33 -- host/bdevperf.sh@36 -- # tgt_init 00:29:43.162 12:05:33 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:43.162 12:05:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:43.162 12:05:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:43.162 12:05:33 -- common/autotest_common.sh@10 -- # set +x 00:29:43.162 [2024-04-18 12:05:33.552977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.553471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.553741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.553757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.553770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.553960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.554152] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.554165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.554176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.557088] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.162 12:05:33 -- nvmf/common.sh@470 -- # nvmfpid=2650506 00:29:43.162 12:05:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:43.162 12:05:33 -- nvmf/common.sh@471 -- # waitforlisten 2650506 00:29:43.162 12:05:33 -- common/autotest_common.sh@817 -- # '[' -z 2650506 ']' 00:29:43.162 12:05:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.162 12:05:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:43.162 12:05:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:43.162 12:05:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:43.162 12:05:33 -- common/autotest_common.sh@10 -- # set +x 00:29:43.162 [2024-04-18 12:05:33.566206] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.566788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.567066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.567082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.567095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.567284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.567476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.567490] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.567502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.570409] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.162 [2024-04-18 12:05:33.579356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.579938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.580270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.580287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.580300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.580496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.580684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.580698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.580709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.583619] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
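The shell trace interleaved above shows bdevperf.sh (line 35) killing the previous nvmf target, which is why the surrounding entries are repeated connect() errno 111 (connection refused) reconnect failures, and then restarting it via tgt_init/nvmfappstart and waiting for the new process to expose its RPC socket at /var/tmp/spdk.sock. As a rough sketch of that wait pattern only (hypothetical helper name, not the actual autotest_common.sh waitforlisten implementation):

wait_for_rpc_socket() {                        # illustrative helper, not from the test suite
    local pid=$1                               # PID of the freshly started nvmf_tgt
    local rpc_sock=${2:-/var/tmp/spdk.sock}    # default SPDK RPC socket path
    local retries=${3:-100}                    # mirrors max_retries=100 in the trace above

    for ((i = 0; i < retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # give up if the target died first
        [[ -S $rpc_sock ]] && return 0            # UNIX-domain RPC socket is present
        sleep 0.5
    done
    return 1                                      # timed out waiting for the socket
}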
00:29:43.162 [2024-04-18 12:05:33.592576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.593150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.593460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.593477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.593490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.593679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.593867] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.593880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.593891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.596808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.162 [2024-04-18 12:05:33.605703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.606254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.606521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.606539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.606552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.606746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.606935] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.606949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.606960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.609887] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.162 [2024-04-18 12:05:33.618924] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.619420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.619615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.619632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.162 [2024-04-18 12:05:33.619646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.162 [2024-04-18 12:05:33.619839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.162 [2024-04-18 12:05:33.620031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.162 [2024-04-18 12:05:33.620045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.162 [2024-04-18 12:05:33.620056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.162 [2024-04-18 12:05:33.622979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.162 [2024-04-18 12:05:33.632127] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.162 [2024-04-18 12:05:33.632730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.633083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-04-18 12:05:33.633100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.163 [2024-04-18 12:05:33.633113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.163 [2024-04-18 12:05:33.633306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.163 [2024-04-18 12:05:33.633510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.163 [2024-04-18 12:05:33.633524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.163 [2024-04-18 12:05:33.633535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.163 [2024-04-18 12:05:33.636442] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.163 [2024-04-18 12:05:33.645371] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.163 [2024-04-18 12:05:33.645977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.646313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.646329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.163 [2024-04-18 12:05:33.646343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.163 [2024-04-18 12:05:33.646542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.163 [2024-04-18 12:05:33.646548] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:43.163 [2024-04-18 12:05:33.646625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.163 [2024-04-18 12:05:33.646733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.163 [2024-04-18 12:05:33.646746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.163 [2024-04-18 12:05:33.646758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.163 [2024-04-18 12:05:33.649692] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.163 [2024-04-18 12:05:33.658659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.163 [2024-04-18 12:05:33.659246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.659554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.659572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.163 [2024-04-18 12:05:33.659585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.163 [2024-04-18 12:05:33.659780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.163 [2024-04-18 12:05:33.659972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.163 [2024-04-18 12:05:33.659985] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.163 [2024-04-18 12:05:33.659997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.163 [2024-04-18 12:05:33.662895] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
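The "Starting SPDK ... DPDK EAL parameters: nvmf -c 0xE ..." entry above confirms the restarted target came up with core mask 0xE (binary 1110, i.e. cores 1-3, matching the -m 0xE passed to nvmf_tgt). A quick, purely illustrative way to decode such a mask:

printf 'cores enabled by 0xE: '
for ((c = 0; c < 8; c++)); do
    (( (0xE >> c) & 1 )) && printf '%d ' "$c"   # test each bit of the mask
done
echo    # prints: cores enabled by 0xE: 1 2 3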
00:29:43.163 [2024-04-18 12:05:33.671799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.163 [2024-04-18 12:05:33.672387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.672738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.672756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.163 [2024-04-18 12:05:33.672770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.163 [2024-04-18 12:05:33.672964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.163 [2024-04-18 12:05:33.673156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.163 [2024-04-18 12:05:33.673170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.163 [2024-04-18 12:05:33.673181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.163 [2024-04-18 12:05:33.676115] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.163 [2024-04-18 12:05:33.685115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.163 [2024-04-18 12:05:33.685693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.686028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.686045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.163 [2024-04-18 12:05:33.686058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.163 [2024-04-18 12:05:33.686252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.163 [2024-04-18 12:05:33.686443] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.163 [2024-04-18 12:05:33.686463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.163 [2024-04-18 12:05:33.686475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.163 [2024-04-18 12:05:33.689399] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.163 [2024-04-18 12:05:33.698309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.163 [2024-04-18 12:05:33.698912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.699242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-04-18 12:05:33.699259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.163 [2024-04-18 12:05:33.699272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.163 [2024-04-18 12:05:33.699470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.163 [2024-04-18 12:05:33.699661] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.163 [2024-04-18 12:05:33.699675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.163 [2024-04-18 12:05:33.699687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.163 [2024-04-18 12:05:33.702616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.424 [2024-04-18 12:05:33.711617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.424 [2024-04-18 12:05:33.712165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.424 [2024-04-18 12:05:33.712519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.424 [2024-04-18 12:05:33.712536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.424 [2024-04-18 12:05:33.712549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.424 [2024-04-18 12:05:33.712742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.424 [2024-04-18 12:05:33.712932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.424 [2024-04-18 12:05:33.712945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.424 [2024-04-18 12:05:33.712957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.424 [2024-04-18 12:05:33.715885] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.424 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.424 [2024-04-18 12:05:33.724877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.424 [2024-04-18 12:05:33.725461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.424 [2024-04-18 12:05:33.725764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.424 [2024-04-18 12:05:33.725781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.424 [2024-04-18 12:05:33.725794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.424 [2024-04-18 12:05:33.725988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.424 [2024-04-18 12:05:33.726179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.424 [2024-04-18 12:05:33.726192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.424 [2024-04-18 12:05:33.726204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.424 [2024-04-18 12:05:33.729144] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.424 [2024-04-18 12:05:33.738161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.424 [2024-04-18 12:05:33.738748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.424 [2024-04-18 12:05:33.739024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.424 [2024-04-18 12:05:33.739041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.424 [2024-04-18 12:05:33.739054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.424 [2024-04-18 12:05:33.739248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.424 [2024-04-18 12:05:33.739439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.424 [2024-04-18 12:05:33.739459] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.424 [2024-04-18 12:05:33.739471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.742405] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.425 [2024-04-18 12:05:33.751377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.751989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.752347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.752364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.752377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.752575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.752766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.752779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.752791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.755735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.425 [2024-04-18 12:05:33.764617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.765167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.765472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.765489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.765502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.765695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.765886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.765899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.765911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.768787] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.425 [2024-04-18 12:05:33.777729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.778308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.778657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.778674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.778687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.778875] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.779060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.779073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.779084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.781927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.425 [2024-04-18 12:05:33.784686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.425 [2024-04-18 12:05:33.790849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.791417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.791778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.791795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.791808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.791996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.792183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.792196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.792207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.795111] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.425 [2024-04-18 12:05:33.804039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.804621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.804908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.804925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.804969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.805163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.805354] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.805367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.805379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.808245] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.425 [2024-04-18 12:05:33.817191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.817770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.818104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.818121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.818134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.818326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.818520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.818534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.818546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.821396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.425 [2024-04-18 12:05:33.830324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.830828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.831200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.831216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.831229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.831417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.831607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.831621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.831632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.834519] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.425 [2024-04-18 12:05:33.843409] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.843947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.844239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.844255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.844267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.844461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.844669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.844683] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.844694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.847570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.425 [2024-04-18 12:05:33.856456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.856957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.857287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.857303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.425 [2024-04-18 12:05:33.857315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.425 [2024-04-18 12:05:33.857525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.425 [2024-04-18 12:05:33.857717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.425 [2024-04-18 12:05:33.857730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.425 [2024-04-18 12:05:33.857742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.425 [2024-04-18 12:05:33.860685] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.425 [2024-04-18 12:05:33.869701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.425 [2024-04-18 12:05:33.870269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.425 [2024-04-18 12:05:33.870562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.870579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.870591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.870778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.870963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.870977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.870988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.873821] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.426 [2024-04-18 12:05:33.882904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.883480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.883813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.883828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.883841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.884026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.884211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.884224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.884235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.887124] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.426 [2024-04-18 12:05:33.895995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.896545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.896938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.896954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.896967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.897155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.897341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.897354] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.897365] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.900270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.426 [2024-04-18 12:05:33.909311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.909886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.910246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.910266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.910280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.910483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.910675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.910689] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.910701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.913644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.426 [2024-04-18 12:05:33.922590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.923096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.923455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.923472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.923486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.923679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.923871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.923885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.923897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.926795] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.426 [2024-04-18 12:05:33.935705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.936271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.936633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.936651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.936665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.936861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.937051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.937065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.937076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.939954] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.426 [2024-04-18 12:05:33.948930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.949510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.949866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.949882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.949898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.950087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.950273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.950286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.950298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.953142] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.426 [2024-04-18 12:05:33.962062] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.426 [2024-04-18 12:05:33.962630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.962982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.426 [2024-04-18 12:05:33.962998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.426 [2024-04-18 12:05:33.963011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.426 [2024-04-18 12:05:33.963204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.426 [2024-04-18 12:05:33.963394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.426 [2024-04-18 12:05:33.963408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.426 [2024-04-18 12:05:33.963419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.426 [2024-04-18 12:05:33.966359] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.687 [2024-04-18 12:05:33.975353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:33.975948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:33.976375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:33.976392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:33.976405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:33.976603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:33.976792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:33.976806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:33.976817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:33.979726] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.687 [2024-04-18 12:05:33.988500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:33.989062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:33.989417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:33.989433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:33.989446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:33.989642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:33.989828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:33.989841] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:33.989852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:33.992738] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.687 [2024-04-18 12:05:34.001624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:34.002180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.002501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.002531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:34.002543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:34.002732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:34.002917] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:34.002931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:34.002942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:34.005814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.687 [2024-04-18 12:05:34.006352] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.687 [2024-04-18 12:05:34.006386] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.687 [2024-04-18 12:05:34.006399] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.687 [2024-04-18 12:05:34.006413] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:43.687 [2024-04-18 12:05:34.006425] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.687 [2024-04-18 12:05:34.006514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.687 [2024-04-18 12:05:34.006576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.687 [2024-04-18 12:05:34.006583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.687 [2024-04-18 12:05:34.014798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:34.015370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.015669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.015687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:34.015702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:34.015899] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:34.016093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:34.016107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:34.016123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:34.019078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.687 [2024-04-18 12:05:34.028116] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:34.028710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.028992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.029009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:34.029023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:34.029219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:34.029411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:34.029425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:34.029437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:34.032379] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.687 [2024-04-18 12:05:34.041376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:34.042000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.042353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.042369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:34.042383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:34.042583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:34.042789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:34.042803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:34.042815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:34.045745] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.687 [2024-04-18 12:05:34.054555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:34.055069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.055426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.055444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:34.055463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:34.055657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:34.055849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.687 [2024-04-18 12:05:34.055863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.687 [2024-04-18 12:05:34.055875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.687 [2024-04-18 12:05:34.058805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.687 [2024-04-18 12:05:34.067787] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.687 [2024-04-18 12:05:34.068339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.068627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.687 [2024-04-18 12:05:34.068644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.687 [2024-04-18 12:05:34.068657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.687 [2024-04-18 12:05:34.068849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.687 [2024-04-18 12:05:34.069040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.069053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.069065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.071993] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.688 [2024-04-18 12:05:34.080959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.081516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.081885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.081901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.081914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.082106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.082299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.082313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.082324] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.085247] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.688 [2024-04-18 12:05:34.094216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.094791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.095149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.095166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.095179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.095373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.095572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.095586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.095598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.098529] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.688 [2024-04-18 12:05:34.107560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.108173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.108536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.108555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.108569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.108768] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.108963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.108976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.108988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.111937] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.688 [2024-04-18 12:05:34.120786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.121344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.121697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.121714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.121728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.121922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.122115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.122129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.122141] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.125080] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.688 [2024-04-18 12:05:34.134090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.134694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.135026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.135043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.135057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.135250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.135442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.135462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.135474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.138404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.688 [2024-04-18 12:05:34.147263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.147828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.148116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.148133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.148146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.148340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.148537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.148551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.148563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.151490] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.688 [2024-04-18 12:05:34.160463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.161048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.161420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.161436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.161454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.161646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.161836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.161849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.161861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.164790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.688 [2024-04-18 12:05:34.173770] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.174326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.174705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.174722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.174734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.174926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.175115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.175129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.175140] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.688 [2024-04-18 12:05:34.178062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.688 [2024-04-18 12:05:34.187030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.688 [2024-04-18 12:05:34.187604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.187901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.688 [2024-04-18 12:05:34.187919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.688 [2024-04-18 12:05:34.187932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.688 [2024-04-18 12:05:34.188123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.688 [2024-04-18 12:05:34.188313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.688 [2024-04-18 12:05:34.188326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.688 [2024-04-18 12:05:34.188338] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.689 [2024-04-18 12:05:34.191261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.689 [2024-04-18 12:05:34.200233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.689 [2024-04-18 12:05:34.200839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.689 [2024-04-18 12:05:34.201171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.689 [2024-04-18 12:05:34.201188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.689 [2024-04-18 12:05:34.201201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.689 [2024-04-18 12:05:34.201396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.689 [2024-04-18 12:05:34.201599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.689 [2024-04-18 12:05:34.201614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.689 [2024-04-18 12:05:34.201625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.689 [2024-04-18 12:05:34.204542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.689 [2024-04-18 12:05:34.213498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.689 [2024-04-18 12:05:34.214092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.689 [2024-04-18 12:05:34.214466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.689 [2024-04-18 12:05:34.214483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.689 [2024-04-18 12:05:34.214496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.689 [2024-04-18 12:05:34.214688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.689 [2024-04-18 12:05:34.214877] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.689 [2024-04-18 12:05:34.214890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.689 [2024-04-18 12:05:34.214902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.689 [2024-04-18 12:05:34.217822] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.689 [2024-04-18 12:05:34.226790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.689 [2024-04-18 12:05:34.227350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.689 [2024-04-18 12:05:34.227711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.689 [2024-04-18 12:05:34.227732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.689 [2024-04-18 12:05:34.227745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.689 [2024-04-18 12:05:34.227937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.689 [2024-04-18 12:05:34.228126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.689 [2024-04-18 12:05:34.228140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.689 [2024-04-18 12:05:34.228151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.689 [2024-04-18 12:05:34.231072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.949 [2024-04-18 12:05:34.240036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.949 [2024-04-18 12:05:34.240614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.949 [2024-04-18 12:05:34.240894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.949 [2024-04-18 12:05:34.240911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.949 [2024-04-18 12:05:34.240923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.949 [2024-04-18 12:05:34.241115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.949 [2024-04-18 12:05:34.241304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.949 [2024-04-18 12:05:34.241318] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.949 [2024-04-18 12:05:34.241330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.949 [2024-04-18 12:05:34.244263] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.949 [2024-04-18 12:05:34.253232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.949 [2024-04-18 12:05:34.253733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.949 [2024-04-18 12:05:34.253978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.949 [2024-04-18 12:05:34.253995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.949 [2024-04-18 12:05:34.254008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.949 [2024-04-18 12:05:34.254199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.949 [2024-04-18 12:05:34.254387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.949 [2024-04-18 12:05:34.254401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.949 [2024-04-18 12:05:34.254412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.949 [2024-04-18 12:05:34.257328] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.949 [2024-04-18 12:05:34.266482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.949 [2024-04-18 12:05:34.267088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.949 [2024-04-18 12:05:34.267327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.267343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.267361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.267563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.267757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.267771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.267783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.270720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.950 [2024-04-18 12:05:34.279758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.280243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.280599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.280616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.280630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.280826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.281018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.281031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.281044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.283988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.950 [2024-04-18 12:05:34.292996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.293518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.293713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.293730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.293743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.293937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.294128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.294141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.294153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.297081] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.950 [2024-04-18 12:05:34.306240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.306681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.306919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.306936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.306951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.307143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.307333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.307347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.307369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.310294] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.950 [2024-04-18 12:05:34.319463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.320020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.320353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.320369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.320382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.320581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.320772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.320785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.320797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.323721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.950 [2024-04-18 12:05:34.332693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.333118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.333416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.333432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.333444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.333640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.333830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.333843] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.333855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.336771] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.950 [2024-04-18 12:05:34.345927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.346437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.346776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.346793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.346805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.346999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.347189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.347203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.347214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.350139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.950 [2024-04-18 12:05:34.359121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.359607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.359942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.359958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.359971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.360162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.360352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.360365] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.360377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.363298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.950 [2024-04-18 12:05:34.372272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.372819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.373103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.950 [2024-04-18 12:05:34.373120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.950 [2024-04-18 12:05:34.373132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.950 [2024-04-18 12:05:34.373323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.950 [2024-04-18 12:05:34.373519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.950 [2024-04-18 12:05:34.373534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.950 [2024-04-18 12:05:34.373545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.950 [2024-04-18 12:05:34.376467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.950 [2024-04-18 12:05:34.385430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.950 [2024-04-18 12:05:34.385976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.386216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.386233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.386245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.386441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.386638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.386651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.386663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.389582] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.951 [2024-04-18 12:05:34.398725] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.399237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.399440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.399461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.399475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.399667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.399856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.399869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.399880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.402803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
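Successive attempts in this run are spaced roughly 13 ms apart (12:05:34.173, .187, .200, .213, ...), i.e. the reset loop retries almost immediately after each refused connect. A rough sketch for extracting that spacing from a saved copy of the console output (build.log is again a hypothetical local file name; the pattern keys off the nvme_ctrlr.c:1651 disconnect messages shown above and ignores minute rollover):

  grep -oE '12:05:[0-9]+\.[0-9]+\] nvme_ctrlr.c:1651' build.log \
    | grep -oE '[0-9]+\.[0-9]+' \
    | awk 'NR > 1 { printf "%.1f ms\n", ($1 - prev) * 1000 } { prev = $1 }'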
00:29:43.951 [2024-04-18 12:05:34.411941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.412508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.412846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.412862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.412875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.413067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.413256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.413269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.413281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.416271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.951 12:05:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:43.951 12:05:34 -- common/autotest_common.sh@850 -- # return 0 00:29:43.951 12:05:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:43.951 12:05:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:43.951 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:29:43.951 [2024-04-18 12:05:34.425255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.425821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.426108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.426124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.426140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.426332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.426528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.426543] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.426554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.429474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.951 [2024-04-18 12:05:34.438432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.438866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.439201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.439218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.439230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.439420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.439617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.439631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.439642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.442572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.951 [2024-04-18 12:05:34.451711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.452152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.452420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.452437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.452456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.452655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.452844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.452858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.452869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.455798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.951 [2024-04-18 12:05:34.464946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 12:05:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.951 12:05:34 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.951 [2024-04-18 12:05:34.465528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 12:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.951 [2024-04-18 12:05:34.465766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.465789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.465803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.465995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:29:43.951 [2024-04-18 12:05:34.466187] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.466202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.466214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.469139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.951 [2024-04-18 12:05:34.469484] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.951 [2024-04-18 12:05:34.478115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.478629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.478965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.478981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.478994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.479186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 [2024-04-18 12:05:34.479384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.951 [2024-04-18 12:05:34.479398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.951 [2024-04-18 12:05:34.479409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.951 [2024-04-18 12:05:34.482330] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.951 [2024-04-18 12:05:34.491300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.951 [2024-04-18 12:05:34.491859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 12:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:43.951 [2024-04-18 12:05:34.492093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.951 [2024-04-18 12:05:34.492109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:43.951 [2024-04-18 12:05:34.492122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:43.951 [2024-04-18 12:05:34.492313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:43.951 12:05:34 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.951 [2024-04-18 12:05:34.492510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.952 [2024-04-18 12:05:34.492524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.952 [2024-04-18 12:05:34.492535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.952 12:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.952 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:29:43.952 [2024-04-18 12:05:34.495449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.211 [2024-04-18 12:05:34.504615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.211 [2024-04-18 12:05:34.505122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.505460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.505477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.211 [2024-04-18 12:05:34.505491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.211 [2024-04-18 12:05:34.505685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.211 [2024-04-18 12:05:34.505876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.211 [2024-04-18 12:05:34.505890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.211 [2024-04-18 12:05:34.505901] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.211 [2024-04-18 12:05:34.508833] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.211 [2024-04-18 12:05:34.517864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.211 [2024-04-18 12:05:34.518414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.518701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.518719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.211 [2024-04-18 12:05:34.518733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.211 [2024-04-18 12:05:34.518926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.211 [2024-04-18 12:05:34.519117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.211 [2024-04-18 12:05:34.519130] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.211 [2024-04-18 12:05:34.519142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.211 [2024-04-18 12:05:34.522082] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.211 [2024-04-18 12:05:34.531088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.211 [2024-04-18 12:05:34.531667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.531936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.531952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.211 [2024-04-18 12:05:34.531965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.211 [2024-04-18 12:05:34.532159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.211 [2024-04-18 12:05:34.532351] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.211 [2024-04-18 12:05:34.532364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.211 [2024-04-18 12:05:34.532376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.211 [2024-04-18 12:05:34.535302] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.211 [2024-04-18 12:05:34.544307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.211 [2024-04-18 12:05:34.544870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.545256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.545272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.211 [2024-04-18 12:05:34.545285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.211 [2024-04-18 12:05:34.545482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.211 [2024-04-18 12:05:34.545671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.211 [2024-04-18 12:05:34.545685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.211 [2024-04-18 12:05:34.545697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.211 [2024-04-18 12:05:34.548633] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.211 [2024-04-18 12:05:34.557630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.211 [2024-04-18 12:05:34.558190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.558496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.558514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.211 [2024-04-18 12:05:34.558526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.211 [2024-04-18 12:05:34.558718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.211 [2024-04-18 12:05:34.558909] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.211 [2024-04-18 12:05:34.558923] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.211 [2024-04-18 12:05:34.558934] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.211 [2024-04-18 12:05:34.561859] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.211 [2024-04-18 12:05:34.570832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.211 [2024-04-18 12:05:34.571431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.571776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.211 [2024-04-18 12:05:34.571793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.211 [2024-04-18 12:05:34.571806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.211 [2024-04-18 12:05:34.571998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.211 [2024-04-18 12:05:34.572188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.211 [2024-04-18 12:05:34.572201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.211 [2024-04-18 12:05:34.572212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.211 [2024-04-18 12:05:34.575136] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.211 Malloc0 00:29:44.211 12:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.211 12:05:34 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.211 12:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.211 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:29:44.211 [2024-04-18 12:05:34.584117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.212 [2024-04-18 12:05:34.584685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.212 [2024-04-18 12:05:34.585018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.212 [2024-04-18 12:05:34.585035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.212 [2024-04-18 12:05:34.585048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.212 [2024-04-18 12:05:34.585240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.212 [2024-04-18 12:05:34.585430] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.212 [2024-04-18 12:05:34.585443] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.212 [2024-04-18 12:05:34.585461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.212 [2024-04-18 12:05:34.588377] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.212 12:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.212 12:05:34 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:44.212 12:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.212 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:29:44.212 [2024-04-18 12:05:34.597341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.212 [2024-04-18 12:05:34.597905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.212 [2024-04-18 12:05:34.598262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.212 [2024-04-18 12:05:34.598278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:29:44.212 [2024-04-18 12:05:34.598291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:29:44.212 [2024-04-18 12:05:34.598487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:29:44.212 [2024-04-18 12:05:34.598678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.212 [2024-04-18 12:05:34.598691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.212 [2024-04-18 12:05:34.598703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.212 12:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.212 12:05:34 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.212 12:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.212 [2024-04-18 12:05:34.601622] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.212 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:29:44.212 [2024-04-18 12:05:34.604419] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.212 12:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.212 12:05:34 -- host/bdevperf.sh@38 -- # wait 2649443 00:29:44.212 [2024-04-18 12:05:34.610595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.470 [2024-04-18 12:05:34.817633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
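Interleaved with the retry noise above, the bash trace from host/bdevperf.sh shows the target being reconfigured over RPC: create the TCP transport, create a 64 MB malloc bdev (Malloc0, 512-byte blocks), create subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets the serial number), attach Malloc0 as a namespace, and add a TCP listener on 10.0.0.2 port 4420; once the listener is up ("NVMe/TCP Target Listening"), the pending reset finally succeeds. A sketch of the same sequence issued by hand against a running nvmf_tgt, assuming an SPDK checkout that provides scripts/rpc.py (the test itself goes through its rpc_cmd wrapper rather than calling rpc.py directly):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420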
00:29:54.451 
00:29:54.451                                                                                        Latency(us)
00:29:54.451  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:54.451  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:54.451   Verification LBA range: start 0x0 length 0x4000
00:29:54.451   Nvme1n1                    :      15.01    7556.43      29.52   12255.84       0.00    6439.43     910.95   30828.13
00:29:54.451  ===================================================================================================================
00:29:54.451  Total                       :               7556.43      29.52   12255.84       0.00    6439.43     910.95   30828.13
00:29:54.451 12:05:44 -- host/bdevperf.sh@39 -- # sync
00:29:54.451 12:05:44 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:54.451 12:05:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:54.451 12:05:44 -- common/autotest_common.sh@10 -- # set +x
00:29:54.451 12:05:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:54.451 12:05:44 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:54.451 12:05:44 -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:54.451 12:05:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:29:54.451 12:05:44 -- nvmf/common.sh@117 -- # sync
00:29:54.451 12:05:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:54.451 12:05:44 -- nvmf/common.sh@120 -- # set +e
00:29:54.451 12:05:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:54.451 12:05:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:54.451 rmmod nvme_tcp
00:29:54.451 rmmod nvme_fabrics
00:29:54.451 rmmod nvme_keyring
00:29:54.451 12:05:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:54.451 12:05:44 -- nvmf/common.sh@124 -- # set -e
00:29:54.451 12:05:44 -- nvmf/common.sh@125 -- # return 0
00:29:54.451 12:05:44 -- nvmf/common.sh@478 -- # '[' -n 2650506 ']'
00:29:54.451 12:05:44 -- nvmf/common.sh@479 -- # killprocess 2650506
00:29:54.451 12:05:44 -- common/autotest_common.sh@936 -- # '[' -z 2650506 ']'
00:29:54.451 12:05:44 -- common/autotest_common.sh@940 -- # kill -0 2650506
00:29:54.451 12:05:44 -- common/autotest_common.sh@941 -- # uname
00:29:54.451 12:05:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:54.451 12:05:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2650506
00:29:54.451 12:05:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:54.451 12:05:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:54.451 12:05:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2650506'
00:29:54.451 killing process with pid 2650506
00:29:54.451 12:05:44 -- common/autotest_common.sh@955 -- # kill 2650506
00:29:54.451 12:05:44 -- common/autotest_common.sh@960 -- # wait 2650506
00:29:55.831 12:05:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:29:55.831 12:05:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:29:55.831 12:05:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:29:55.831 12:05:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:55.831 12:05:46 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:55.831 12:05:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:55.831 12:05:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:55.831 12:05:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:57.738 12:05:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:57.738 
00:29:57.738 real 0m31.669s
00:29:57.738 user 1m14.995s
00:29:57.738 sys 0m8.543s
00:29:57.738 12:05:48 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:29:57.738 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:29:57.738 ************************************ 00:29:57.738 END TEST nvmf_bdevperf 00:29:57.738 ************************************ 00:29:57.738 12:05:48 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:57.738 12:05:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:57.738 12:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:57.738 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:29:57.997 ************************************ 00:29:57.997 START TEST nvmf_target_disconnect 00:29:57.997 ************************************ 00:29:57.997 12:05:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:57.997 * Looking for test storage... 00:29:57.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.997 12:05:48 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.997 12:05:48 -- nvmf/common.sh@7 -- # uname -s 00:29:58.256 12:05:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.256 12:05:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.256 12:05:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.256 12:05:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.256 12:05:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.256 12:05:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.256 12:05:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.256 12:05:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.256 12:05:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.256 12:05:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.256 12:05:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:58.256 12:05:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:58.256 12:05:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.256 12:05:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.256 12:05:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.256 12:05:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.256 12:05:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.256 12:05:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.256 12:05:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.256 12:05:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.256 12:05:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.256 12:05:48 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.256 12:05:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.256 12:05:48 -- paths/export.sh@5 -- # export PATH 00:29:58.257 12:05:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.257 12:05:48 -- nvmf/common.sh@47 -- # : 0 00:29:58.257 12:05:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.257 12:05:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.257 12:05:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.257 12:05:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.257 12:05:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.257 12:05:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.257 12:05:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.257 12:05:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.257 12:05:48 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:58.257 12:05:48 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:58.257 12:05:48 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:58.257 12:05:48 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:58.257 12:05:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:58.257 12:05:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.257 12:05:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:58.257 12:05:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:58.257 12:05:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:58.257 12:05:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.257 12:05:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.257 12:05:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.257 12:05:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:58.257 12:05:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:58.257 12:05:48 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:29:58.257 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 12:05:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:04.827 12:05:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.827 12:05:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.827 12:05:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.827 12:05:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.827 12:05:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.827 12:05:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:04.827 12:05:55 -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.827 12:05:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.827 12:05:55 -- nvmf/common.sh@296 -- # e810=() 00:30:04.828 12:05:55 -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.828 12:05:55 -- nvmf/common.sh@297 -- # x722=() 00:30:04.828 12:05:55 -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.828 12:05:55 -- nvmf/common.sh@298 -- # mlx=() 00:30:04.828 12:05:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.828 12:05:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.828 12:05:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.828 12:05:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.828 12:05:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.828 12:05:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.828 12:05:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:04.828 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:04.828 12:05:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.828 12:05:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:04.828 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:04.828 12:05:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.828 12:05:55 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.828 12:05:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.828 12:05:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.828 12:05:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:04.828 12:05:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.828 12:05:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:04.828 Found net devices under 0000:af:00.0: cvl_0_0 00:30:04.828 12:05:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.828 12:05:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.828 12:05:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.828 12:05:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:04.828 12:05:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.828 12:05:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:04.828 Found net devices under 0000:af:00.1: cvl_0_1 00:30:04.828 12:05:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.828 12:05:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:04.828 12:05:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:04.828 12:05:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:04.828 12:05:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:04.828 12:05:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.828 12:05:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.828 12:05:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.828 12:05:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.828 12:05:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.828 12:05:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.828 12:05:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.828 12:05:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.828 12:05:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.828 12:05:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.828 12:05:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.828 12:05:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.828 12:05:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.828 12:05:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.828 12:05:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.828 12:05:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:04.828 12:05:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.087 12:05:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.087 12:05:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.087 12:05:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:05.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:30:05.087 00:30:05.087 --- 10.0.0.2 ping statistics --- 00:30:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.087 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:05.087 12:05:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:30:05.087 00:30:05.087 --- 10.0.0.1 ping statistics --- 00:30:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.087 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:30:05.087 12:05:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.087 12:05:55 -- nvmf/common.sh@411 -- # return 0 00:30:05.087 12:05:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:05.087 12:05:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.087 12:05:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:05.087 12:05:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:05.087 12:05:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.087 12:05:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:05.087 12:05:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:05.087 12:05:55 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:05.087 12:05:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:05.087 12:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:05.087 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:30:05.087 ************************************ 00:30:05.087 START TEST nvmf_target_disconnect_tc1 00:30:05.087 ************************************ 00:30:05.088 12:05:55 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:30:05.088 12:05:55 -- host/target_disconnect.sh@32 -- # set +e 00:30:05.088 12:05:55 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.347 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.347 [2024-04-18 12:05:55.761419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.347 [2024-04-18 12:05:55.761841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.347 [2024-04-18 12:05:55.761868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002040 with addr=10.0.0.2, port=4420 00:30:05.347 [2024-04-18 12:05:55.761929] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.347 [2024-04-18 12:05:55.761951] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.347 [2024-04-18 12:05:55.761966] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:05.347 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:05.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:05.347 Initializing NVMe Controllers 00:30:05.347 12:05:55 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:05.347 12:05:55 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:05.347 12:05:55 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:30:05.347 12:05:55 -- common/autotest_common.sh@1139 -- # return 0 00:30:05.347 
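Both disconnect tests run against the environment that nvmf_tcp_init assembled in the preceding lines: one of the two ice-driven ports discovered above (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and the two pings confirm the path in both directions. A minimal sketch of that wiring, using the interface and namespace names from this trace (commands as they appear above, minus the xtrace prefixes):
  # target side lives in its own namespace, reachable over the physical link between the two ports
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace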
12:05:55 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:05.347 12:05:55 -- host/target_disconnect.sh@41 -- # set -e 00:30:05.347 00:30:05.347 real 0m0.189s 00:30:05.347 user 0m0.072s 00:30:05.347 sys 0m0.116s 00:30:05.347 12:05:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:05.347 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:30:05.347 ************************************ 00:30:05.347 END TEST nvmf_target_disconnect_tc1 00:30:05.347 ************************************ 00:30:05.347 12:05:55 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:05.347 12:05:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:05.347 12:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:05.347 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:30:05.606 ************************************ 00:30:05.606 START TEST nvmf_target_disconnect_tc2 00:30:05.606 ************************************ 00:30:05.606 12:05:55 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:30:05.606 12:05:55 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:05.606 12:05:55 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:05.606 12:05:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:05.606 12:05:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:05.606 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:30:05.606 12:05:55 -- nvmf/common.sh@470 -- # nvmfpid=2656381 00:30:05.606 12:05:55 -- nvmf/common.sh@471 -- # waitforlisten 2656381 00:30:05.606 12:05:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:05.606 12:05:55 -- common/autotest_common.sh@817 -- # '[' -z 2656381 ']' 00:30:05.606 12:05:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.606 12:05:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:05.606 12:05:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.606 12:05:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:05.606 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:30:05.606 [2024-04-18 12:05:56.067513] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:05.606 [2024-04-18 12:05:56.067601] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.606 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.865 [2024-04-18 12:05:56.213956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.122 [2024-04-18 12:05:56.426876] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.122 [2024-04-18 12:05:56.426923] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.122 [2024-04-18 12:05:56.426935] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.122 [2024-04-18 12:05:56.426948] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
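nvmf_target_disconnect_tc1, which finished above, exercises only the failure path: nothing is listening on 10.0.0.2:4420 yet, so the reconnect example's spdk_nvme_probe() is expected to fail (connect() returns errno 111, ECONNREFUSED), and the script only verifies the example's exit status; the trace shows it disabling errexit around the call and comparing the status against 1 before re-enabling it. A condensed sketch of that pattern, with the long example path shortened to a relative one:
  set +e
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  rc=$?
  set -e
  [ "$rc" -eq 1 ]    # probe must fail while no target is listening (status 1 in this run)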
00:30:06.122 [2024-04-18 12:05:56.426957] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.122 [2024-04-18 12:05:56.427128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:06.122 [2024-04-18 12:05:56.427207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:06.122 [2024-04-18 12:05:56.427220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.122 [2024-04-18 12:05:56.427248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.378 12:05:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:06.378 12:05:56 -- common/autotest_common.sh@850 -- # return 0 00:30:06.378 12:05:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:06.378 12:05:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:06.378 12:05:56 -- common/autotest_common.sh@10 -- # set +x 00:30:06.378 12:05:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.378 12:05:56 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.378 12:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.378 12:05:56 -- common/autotest_common.sh@10 -- # set +x 00:30:06.634 Malloc0 00:30:06.634 12:05:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.634 12:05:56 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:06.634 12:05:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.634 12:05:56 -- common/autotest_common.sh@10 -- # set +x 00:30:06.634 [2024-04-18 12:05:56.992389] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.634 12:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.634 12:05:57 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.634 12:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.634 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:30:06.634 12:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.634 12:05:57 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.634 12:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.634 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:30:06.635 12:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.635 12:05:57 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.635 12:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.635 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:30:06.635 [2024-04-18 12:05:57.028826] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.635 12:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.635 12:05:57 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.635 12:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.635 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:30:06.635 12:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.635 12:05:57 -- host/target_disconnect.sh@50 -- # reconnectpid=2656463 00:30:06.635 12:05:57 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:06.635 12:05:57 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.635 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.543 12:05:59 -- host/target_disconnect.sh@53 -- # kill -9 2656381 00:30:08.543 12:05:59 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 
00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 Read completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.543 [2024-04-18 12:05:59.071847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.543 Write completed with error (sct=0, sc=8) 00:30:08.543 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 [2024-04-18 12:05:59.072229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 
starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 [2024-04-18 12:05:59.072608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 
00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Write completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 Read completed with error (sct=0, sc=8) 00:30:08.544 starting I/O failed 00:30:08.544 [2024-04-18 12:05:59.072991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.544 [2024-04-18 12:05:59.073413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.073728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.073785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.544 qpair failed and we were unable to recover it. 00:30:08.544 [2024-04-18 12:05:59.074168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.074533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.074552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.544 qpair failed and we were unable to recover it. 00:30:08.544 [2024-04-18 12:05:59.074802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.075150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.075167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.544 qpair failed and we were unable to recover it. 00:30:08.544 [2024-04-18 12:05:59.075516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.075886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.075902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.544 qpair failed and we were unable to recover it. 
00:30:08.544 [2024-04-18 12:05:59.076129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.076461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.076493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.544 qpair failed and we were unable to recover it. 00:30:08.544 [2024-04-18 12:05:59.076756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.077024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.077051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.544 qpair failed and we were unable to recover it. 00:30:08.544 [2024-04-18 12:05:59.077387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.544 [2024-04-18 12:05:59.077707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.077756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.078199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.078556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.078572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.078835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.079113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.079129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.079407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.079715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.079731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.079954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.080315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.080330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 
00:30:08.545 [2024-04-18 12:05:59.080606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.080817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.080834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.081043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.081255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.081271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.081647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.081937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.081953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.082301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.082694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.082710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.082918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.083188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.083203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.083552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.083815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.083831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.084163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.084455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.084471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 
00:30:08.545 [2024-04-18 12:05:59.084796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.085080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.085096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.085385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.085667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.085683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.086018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.086367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.086382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.086645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.086872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.086888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.087121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.087415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.087431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.087712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.088006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.088021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.088379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.088646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.088662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 
00:30:08.545 [2024-04-18 12:05:59.088988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.089377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.089393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.089748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.090083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.090131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.545 [2024-04-18 12:05:59.090570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.090787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.545 [2024-04-18 12:05:59.090803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.545 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.091079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.091438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.091459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.092303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.092692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.092713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.092964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.093326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.093375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.093785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.094061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.094077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 
00:30:08.812 [2024-04-18 12:05:59.094388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.094619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.094635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.094865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.095130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.095146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.095426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.095677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.095694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.095894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.096223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.096239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.096559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.096932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.096948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.097267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.097703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.097752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-04-18 12:05:59.098080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.098421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-04-18 12:05:59.098507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 
00:30:08.812 [2024-04-18 12:05:59.098804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.099192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.099242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.099703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.100040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.100090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.100481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.100770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.100820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.101142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.101560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.101609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.101987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.102391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.102406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.102758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.103121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.103170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.103579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.103912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.103961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-04-18 12:05:59.104336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.104627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.104677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.105030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.105420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.105482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.105788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.106108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.106158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.106502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.106726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.106741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.107081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.107358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.107373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.107711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.107960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.107976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.108291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.108623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.108673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-04-18 12:05:59.109087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.109439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.109498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.109916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.110172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.110222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.110567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.110999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.111049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.111471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.111734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.111784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.112135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.112479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.112529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.112900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.113301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.113349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.113722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.114087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.114137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-04-18 12:05:59.114585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.114863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.114913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.115331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.115740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.115790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.116142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.116488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.116561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.116905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.117263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.117312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.117677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.118013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.118062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.118481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.118822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.118872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-04-18 12:05:59.119246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.119604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.119654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-04-18 12:05:59.120004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.120448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-04-18 12:05:59.120530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.120826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.121218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.121267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.121515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.122295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.122322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.122708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.123011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.123061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.123440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.123847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.123897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.124285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.124624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.124674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.124971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.125319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.125368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 
00:30:08.814 [2024-04-18 12:05:59.125760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.126121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.126171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.126637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.126979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.127029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.127475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.127895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.127944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.128409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.128766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.128817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.129176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.129513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.129564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.129856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.130129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.130181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.130512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.130932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.130981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 
00:30:08.814 [2024-04-18 12:05:59.131367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.131721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.131772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.132133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.132546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.132596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.132994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.133273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.133322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.133695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.133957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.134007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.134445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.134721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.134769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.135479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.135792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.135817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.136117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.136474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.136510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 
00:30:08.814 [2024-04-18 12:05:59.136855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.137124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.137139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.137497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.137942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.137991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.138314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.138682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.138733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.139206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.139575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.139624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.140022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.140409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.140472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-04-18 12:05:59.140873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.141223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-04-18 12:05:59.141272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.141583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.141924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.141973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 
00:30:08.815 [2024-04-18 12:05:59.142346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.142684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.142700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.142915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.143198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.143247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.143524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.143889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.143938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.144341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.144730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.144780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.145182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.145518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.145567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.145992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.146389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.146439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.146806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.147131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.147181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 
00:30:08.815 [2024-04-18 12:05:59.147428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.147615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.147631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.147903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.148169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.148218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.148608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.148867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.148916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.149361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.149698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.149749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.150126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.150495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.150544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.150957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.151383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.151432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.151755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.152075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.152123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 
00:30:08.815 [2024-04-18 12:05:59.152513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.152849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.152898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.153170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.153438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.153504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.153892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.154116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.154130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.154259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.154497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.154548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.154898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.155159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.155208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.155614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.155988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.156036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.156421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.156826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.156876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 
00:30:08.815 [2024-04-18 12:05:59.157178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.157515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.157547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.157814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.158175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.158224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.158572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.158870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.158886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.159170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.159505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.159557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.159913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.160273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.160335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.160615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.160958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.161007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.815 qpair failed and we were unable to recover it. 00:30:08.815 [2024-04-18 12:05:59.161401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.815 [2024-04-18 12:05:59.161666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.161682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 
00:30:08.816 [2024-04-18 12:05:59.161952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.162245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.162261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.162614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.162882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.162932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.163282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.163612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.163662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.164601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.164999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.165057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.165403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.165728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.165769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.166040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.166250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.166265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.166575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.166845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.166861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 
00:30:08.816 [2024-04-18 12:05:59.167076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.167333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.167352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.167623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.167923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.167939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.168201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.168469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.168486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.168745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.168961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.168977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.169250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.169523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.169540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.169806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.170004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.170020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.170355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.170571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.170588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 
00:30:08.816 [2024-04-18 12:05:59.170873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.171084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.171100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.171380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.171584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.171600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.171879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.172138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.172154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.172421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.172639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.172657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.172863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.173076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.173091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.173373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.173629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.173646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.173864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.174120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.174136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 
00:30:08.816 [2024-04-18 12:05:59.174340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.174607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.174623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.174949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.175139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.175155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.175456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.175724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.175741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.176005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.176294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.176342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.176624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.176904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.176954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.177214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.177476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.177527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 00:30:08.816 [2024-04-18 12:05:59.177802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.178053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.816 [2024-04-18 12:05:59.178119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.816 qpair failed and we were unable to recover it. 
00:30:08.817 [2024-04-18 12:05:59.178478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.178826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.178876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.179158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.179424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.179514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.179709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.180073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.180123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.180541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.180872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.180921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.181266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.181555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.181604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.181884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.182205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.182253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.182528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.182915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.182964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 
00:30:08.817 [2024-04-18 12:05:59.183344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.183689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.183738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.184001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.184408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.184472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.184843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.185231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.185279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.185622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.186013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.186072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.186443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.186755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.186771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.187067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.187313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.187361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.187701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.188109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.188159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 
00:30:08.817 [2024-04-18 12:05:59.188515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.188854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.188903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.189240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.189471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.189503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.189743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.190011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.190059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.190477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.190750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.190766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.191024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.191306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.191355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.191715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.192054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.192103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.192448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.192852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.192902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 
00:30:08.817 [2024-04-18 12:05:59.193266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.193670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.817 [2024-04-18 12:05:59.193719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.817 qpair failed and we were unable to recover it. 00:30:08.817 [2024-04-18 12:05:59.194060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.194470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.194519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.194796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.195158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.195208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.195551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.195840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.195889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.196286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.196676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.196725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.196999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.197351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.197399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.197788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.198057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.198106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 
00:30:08.818 [2024-04-18 12:05:59.198441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.198693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.198709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.198973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.199382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.199431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.199885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.200205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.200254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.200683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.201064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.201113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.201533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.201926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.201977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.202349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.202748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.202764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.203020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.203284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.203333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 
00:30:08.818 [2024-04-18 12:05:59.203611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.203779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.203828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.204180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.204360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.204409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.204754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.205162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.205211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.205529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.205862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.205911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.206352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.206578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.206593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.206828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.207161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.207210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.207639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.207972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.208021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 
00:30:08.818 [2024-04-18 12:05:59.208419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.208786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.208836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.818 [2024-04-18 12:05:59.209201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.209562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.818 [2024-04-18 12:05:59.209612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.818 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.210032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.210298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.210347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.210766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.211151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.211199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.211556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.211966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.212014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.212433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.212859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.212909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.213352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.213718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.213768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 
00:30:08.819 [2024-04-18 12:05:59.214179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.214345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.214360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.214558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.214810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.214869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.215269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.215639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.215688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.215963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.216369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.216419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.216634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.217065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.217114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.217472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.217822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.217872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.218230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.218512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.218562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 
00:30:08.819 [2024-04-18 12:05:59.218921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.219237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.219287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.219555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.219809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.219825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.220109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.220437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.220515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.220803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.221124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.221139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.221421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.221709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.221759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.222227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.222596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.222612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.222886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.223270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.223318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 
00:30:08.819 [2024-04-18 12:05:59.223674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.224007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.224056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.224388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.224632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.224682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.225047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.225318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.225367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.225725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.226133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.226180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.226533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.226783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.226836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.227275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.227623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.227672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.228005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.228196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.228245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 
00:30:08.819 [2024-04-18 12:05:59.228665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.229009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-04-18 12:05:59.229057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-04-18 12:05:59.229502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.229849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.229897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.230194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.230588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.230638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.231037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.231363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.231412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.231770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.232049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.232098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.232472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.232792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.232841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.233239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.233553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.233603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-04-18 12:05:59.233936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.234254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.234303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.234728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.235049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.235098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.235478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.235864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.235913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.236317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.236727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.236777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.237143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.237490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.237540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.237983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.238331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.238380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.238606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.238886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.238935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-04-18 12:05:59.239213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.239555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.239604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.239946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.240337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.240385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.240805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.241145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.241193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.241464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.241765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.241814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.242185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.242538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.242555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.242803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.243091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.243141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.243583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.243908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.243956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-04-18 12:05:59.244385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.244661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.244712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.245055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.245324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.245373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.245703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.246055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-04-18 12:05:59.246103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-04-18 12:05:59.246546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.246958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.247007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.247412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.247707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.247725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.248061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.248412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.248470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.248811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.249159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.249208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-04-18 12:05:59.249613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.249959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.250008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.250444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.250819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.250869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.251289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.251649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.251699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.252122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.252440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.252501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.252860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.253206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.253255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.253709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.254052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.254102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.254457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.254812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.254833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-04-18 12:05:59.255579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.255894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.255950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.256297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.256626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.256642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.256902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.257168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.257184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.257526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.257788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.257838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.258190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.258570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.258620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.258993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.259345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.259394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.259721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.260111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.260160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-04-18 12:05:59.260479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.260675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.260691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.260901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.261230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.261279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.261619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.262005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.262054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.262331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.262615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.262665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.263062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.263393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.263442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.263740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.264129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-04-18 12:05:59.264178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-04-18 12:05:59.264479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.264810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.264858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-04-18 12:05:59.265312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.265700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.265749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.266088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.266480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.266530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.266875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.267131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.267180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.267598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.267952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.268000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.268426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.268763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.268778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.269080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.269305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.269339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.269685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.269950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.269999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-04-18 12:05:59.270271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.270589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.270634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.270921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.271180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.271229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.271617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.271895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.271944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.272359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.272699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.272748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.273152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.273474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.273535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.273864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.274202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.274251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.274627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.274945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.274960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-04-18 12:05:59.275253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.275591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.275640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.275837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.276091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.276106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.276411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.276703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.276753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.277081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.277405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.277469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.277757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.278078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.278127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.278527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.278790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.278806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-04-18 12:05:59.279102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.279447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.279524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-04-18 12:05:59.279866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.280218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-04-18 12:05:59.280274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.280607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.280876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.280891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.281248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.281585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.281634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.282074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.282393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.282442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.282787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.283070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.283085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.283360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.283699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.283749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.284102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.284378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.284427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 
00:30:08.823 [2024-04-18 12:05:59.284778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.285039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.285087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.285478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.285789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.285804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.286057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.286328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.286376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.286574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.286961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.287016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.287447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.287896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.287944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.288203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.288517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.288579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.288925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.289268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.289316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 
00:30:08.823 [2024-04-18 12:05:59.289678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.289948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.289997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.290403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.290670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.290720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.291127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.291471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.291520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.291892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.292226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.292275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.292673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.293084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.293133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.293538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.293742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.293757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.294041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.294473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.294534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 
00:30:08.823 [2024-04-18 12:05:59.294861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.295111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.295159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-04-18 12:05:59.295496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.823 [2024-04-18 12:05:59.295836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.295885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.296242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.296570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.296585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.296854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.297132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.297181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.297588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.297918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.297933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.298031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.298302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.298350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.298720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.299044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.299092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-04-18 12:05:59.299349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.299598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.299648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.300069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.300422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.300487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.300915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.301236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.301285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.301718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.302051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.302100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.302522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.302756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.302806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.303226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.303576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.303639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.304068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.304311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.304359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-04-18 12:05:59.304777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.305106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.305155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.305555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.305965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.306015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.306295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.306637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.306687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.307088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.307490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.307540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.307964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.308303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.308352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.308744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.309038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.309086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.309365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.309697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.309712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-04-18 12:05:59.310045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.310468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.310518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.310876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.311193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.311242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.311500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.311919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.311968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.312372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.312769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.312820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.313219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.313487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.313536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.313964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.314379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.314428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-04-18 12:05:59.314797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.824 [2024-04-18 12:05:59.315185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.315234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 
00:30:08.825 [2024-04-18 12:05:59.315660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.316054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.316102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.316535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.316872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.316920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.317336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.317721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.317780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.318065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.318394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.318443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.318732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.319079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.319129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.319529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.319857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.319906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.320291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.320647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.320696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 
00:30:08.825 [2024-04-18 12:05:59.321033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.321377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.321426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.321851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.322241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.322291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.322643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.322931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.322980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.323317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.323704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.323754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.324029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.324438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.324459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.324801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.325065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.325114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.325473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.325737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.325795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 
00:30:08.825 [2024-04-18 12:05:59.326160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.326539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.326555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.326904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.327208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.327257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.327646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.328031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.328079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.328410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.328778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.328794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.329079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.329272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.329288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.329557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.329847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.329895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.330238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.330501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.330551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 
00:30:08.825 [2024-04-18 12:05:59.330942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.331347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.331376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.331671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.332044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.332094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.332518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.332858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.332907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.333268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.333625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.333674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-04-18 12:05:59.333926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.825 [2024-04-18 12:05:59.334247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.334296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.334636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.334894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.334909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.335206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.335527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.335543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 
00:30:08.826 [2024-04-18 12:05:59.335817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.336116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.336165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.336509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.336915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.336963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.337377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.337766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.337782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.338036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.338296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.338345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.338754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.339106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.339154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.339573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.339833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.339849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.340194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.340532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.340548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 
00:30:08.826 [2024-04-18 12:05:59.340836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.341109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.341158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.341512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.341797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.341846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.342266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.342591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.342628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.343065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.343402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.343464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.343862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.344211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.344225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.344501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.344880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.344929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.345261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.345648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.345663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 
00:30:08.826 [2024-04-18 12:05:59.346031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.346306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.346356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.346720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.346929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.346950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.347147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.347472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.347522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.347852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.348240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.348288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.826 [2024-04-18 12:05:59.348720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.348891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.826 [2024-04-18 12:05:59.348907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.826 qpair failed and we were unable to recover it. 00:30:08.827 [2024-04-18 12:05:59.349183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.349483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.349499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.827 qpair failed and we were unable to recover it. 00:30:08.827 [2024-04-18 12:05:59.349795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.350126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.350176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.827 qpair failed and we were unable to recover it. 
00:30:08.827 [2024-04-18 12:05:59.350569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.350976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.350992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.827 qpair failed and we were unable to recover it. 00:30:08.827 [2024-04-18 12:05:59.351322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.351628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.351643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.827 qpair failed and we were unable to recover it. 00:30:08.827 [2024-04-18 12:05:59.351910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.352278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.352326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:08.827 qpair failed and we were unable to recover it. 00:30:08.827 [2024-04-18 12:05:59.352738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.353133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.827 [2024-04-18 12:05:59.353161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:08.827 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.353331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.353689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.353717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.354075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.354360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.354382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.354745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.355081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.355102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-04-18 12:05:59.355378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.355678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.355727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.356143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.356471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.356519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.356803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.357176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.357223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.357646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.358060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.358081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.358437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.358870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.358891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.359173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.359529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.359578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.359972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.360326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.360374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-04-18 12:05:59.360668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.360997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.361018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.361246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.361610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.361659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.361940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.362159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.362180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.362518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.362877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.362925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.363331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.363663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.363711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.364050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.364384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.364432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.364837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.365153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.365201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-04-18 12:05:59.365554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.365966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.366014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.366268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.366636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.366685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.367048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.367466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.367516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.367897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.368307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.368355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-04-18 12:05:59.368694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-04-18 12:05:59.369011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.369059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.369500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.369842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.369891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.370223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.370574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.370595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-04-18 12:05:59.370898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.371238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.371286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.371707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.372025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.372073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.372340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.372608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.372657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.373083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.373413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.373474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.373892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.374209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.374258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.374536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.374825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.374877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.375220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.375582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.375630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-04-18 12:05:59.376074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.376395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.376442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.376832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.377130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.377151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.377446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.377716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.377737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.377964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.378266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.378288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.378585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.378820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.378868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.379263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.379660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.379709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.380129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.380490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.380539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-04-18 12:05:59.380949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.381302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.381350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.381769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.382094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.382148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.382560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.382968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.383015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.383360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.383697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.383745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.384165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.384523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.384573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.384852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.385149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.385169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.385460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.385858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.385879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-04-18 12:05:59.386106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.386377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.386425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.386692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.386902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.386950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.387334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.387669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.387718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.388141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.388486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.388535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.388878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.389216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.389270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-04-18 12:05:59.389627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-04-18 12:05:59.390017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.390064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.390467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.390800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.390848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-04-18 12:05:59.391187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.391521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.391569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.391988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.392322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.392370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.392770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.393106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.393127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.393354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.393737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.393786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.394184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.394532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.394553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.394896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.395212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.395260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.395521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.395862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.395909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-04-18 12:05:59.396206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.396580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.396604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.396980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.397327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.397374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.397733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.398078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.398125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.398486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.398871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.398918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.399261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.399537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.399586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.399923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.400239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.400288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.400668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.400889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.400910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-04-18 12:05:59.401275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.401660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.401709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.402131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.402532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.402580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.402950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.403278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.403327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.403728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.404053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.404077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.404472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.404820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.404868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.405315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.405597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.405646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.405990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.406269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.406290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-04-18 12:05:59.406586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.406861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.406882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.407179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.407345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.407393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.407742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.408081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.408128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.408495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.408881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.408907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.409105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.409395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.409444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.409710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.410047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-04-18 12:05:59.410094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-04-18 12:05:59.410482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.410818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.410877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 
00:30:09.098 [2024-04-18 12:05:59.411302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.411633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.411681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.412021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.412297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.412318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.412615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.412918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.412939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.413243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.413574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.413622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.414009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.414362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.414383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.414728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.415014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.415062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.415399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.415644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.415664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 
00:30:09.098 [2024-04-18 12:05:59.415950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.416233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.416281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.416525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.416943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.416964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.417207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.417561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.417582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.417892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.418238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.418286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.418612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.419003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.419052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.419417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.419761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.419809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.420211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.420548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.420596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 
00:30:09.098 [2024-04-18 12:05:59.420982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.421303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.421352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.421682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.422084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.422105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.422441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.422836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.422884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.423160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.423429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.423484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.423858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.424149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.424196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.424530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.424942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.424962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.425260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.425667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.425715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 
00:30:09.098 [2024-04-18 12:05:59.426069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.426476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.426525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.426955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.427340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.427389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.427748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.428078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.428125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.428544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.428813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.428834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.429192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.429623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.429671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.430032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.430351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.430399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 00:30:09.098 [2024-04-18 12:05:59.430824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.431094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.098 [2024-04-18 12:05:59.431114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.098 qpair failed and we were unable to recover it. 
00:30:09.099 [2024-04-18 12:05:59.431400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.431796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.431845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.432199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.432501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.432551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.432899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.433216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.433265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.433683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.434015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.434063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.434425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.434836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.434856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.435190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.435534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.435583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.436009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.436393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.436441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 
00:30:09.099 [2024-04-18 12:05:59.436806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.437049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.437069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.437248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.437643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.437692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.438032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.438403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.438485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.438831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.439246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.439295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.439691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.440002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.440022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.440391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.440744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.440792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.441162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.441514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.441563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 
00:30:09.099 [2024-04-18 12:05:59.442003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.442338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.442359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.442673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.443081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.443127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.443546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.443809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.443829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.444102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.444401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.444449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.444829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.445172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.445221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.445562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.445799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.445826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.446182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.446527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.446547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 
00:30:09.099 [2024-04-18 12:05:59.446780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.447146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.447194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-04-18 12:05:59.447498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.447760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-04-18 12:05:59.447807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.448150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.448439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.448507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.448920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.449271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.449292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.449599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.449939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.449959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.450342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.450626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.450647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.450859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.451175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.451196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-04-18 12:05:59.451431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.451774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.451823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.452110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.452377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.452424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.452700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.453082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.453131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.453413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.453743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.453791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.454100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.454376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.454397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.454772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.455088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.455136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.455570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.455958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.456006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-04-18 12:05:59.456250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.456619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.456667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.456932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.457212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.457260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.457523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.457906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.457958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.458320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.458653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.458702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.459095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.459419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.459475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.459897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.460248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.460296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.460633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.460952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.460999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-04-18 12:05:59.461439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.461859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.461908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.462272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.462605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.462654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.463051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.463386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.463434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.463862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.464181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.464229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.464601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.465011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.465059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.465346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.465757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.465806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.466175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.466521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.466570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-04-18 12:05:59.466849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.467198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.467246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.467509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.467827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.467874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-04-18 12:05:59.468282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.468598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-04-18 12:05:59.468646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.469039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.469463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.469511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.469855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.470179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.470199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.470587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.470877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.470925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.471292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.471625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.471673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-04-18 12:05:59.472066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.472498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.472547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.472887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.473239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.473297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.473723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.474144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.474193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.474547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.474932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.474980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.475329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.475598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.475647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.476098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.476485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.476506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.476868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.477078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.477125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-04-18 12:05:59.477536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.477862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.477910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.478184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.478604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.478653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.479084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.479441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.479497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.479840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.480179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.480226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.480492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.480762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.480810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.481220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.481553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.481574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.481815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.482111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.482158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-04-18 12:05:59.482522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.482788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.482836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.483242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.483587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.483636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.483925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.484276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.484323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.484675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.484971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.485027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.485482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.485834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.485882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.486175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.486513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.486581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.487018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.487344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.487392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-04-18 12:05:59.487753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.488160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.488207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.488570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.488890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.488911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.489197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.489550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.489571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-04-18 12:05:59.489909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.490221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-04-18 12:05:59.490269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.490688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.490972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.490993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.491354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.491721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.491770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.492015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.492309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.492366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-04-18 12:05:59.492725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.493071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.493092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.493459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.493744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.493792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.494196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.494523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.494545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.494853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.495033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.495081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.495422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.495758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.495816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.496106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.496407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.496462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.496820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.497148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.497196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-04-18 12:05:59.497617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.497884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.497931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.498287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.498623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.498683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.499004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.499363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.499412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.499783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.500168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.500215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.500623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.500957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.501006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.501326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.501434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.501460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.501727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.502011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.502058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-04-18 12:05:59.502481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.502816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.502865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.503247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.503557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.503605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.503955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.504382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.504429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.504808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.505142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.505190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.505592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.505927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.505982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.506420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.506768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.506815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.507098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.507439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.507498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-04-18 12:05:59.507864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.508155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.508203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.508562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.508897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.508945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.509396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.509664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.509713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.510054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.510321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.510341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-04-18 12:05:59.510700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.510966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-04-18 12:05:59.511014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.511316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.511670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.511718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.512137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.512464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.512512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-04-18 12:05:59.512884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.513273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.513328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.513750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.514083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.514131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.514535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.514926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.514974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.515377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.515793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.515842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.516164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.516448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.516480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.516861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.517131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.517151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.517435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.517813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.517862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-04-18 12:05:59.518209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.518462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.518483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.518609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.518912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.518959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.519134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.519542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.519591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.519925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.520238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.520298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.520647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.520859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.520880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.521214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.521566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.521587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.521826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.522123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.522170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-04-18 12:05:59.522526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.522909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.522958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.523304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.523572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.523620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.523901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.524283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.524330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.524688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.525074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.525095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.525318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.525704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.525753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.526152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.526422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.526442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.526752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.527048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.527069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-04-18 12:05:59.527359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.527766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.527815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.528019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.528356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.528405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.528786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.529109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.529130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.529463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.529882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.529931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.530268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.530622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.530670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-04-18 12:05:59.531043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-04-18 12:05:59.531392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.531448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.531793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.532129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.532182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-04-18 12:05:59.532394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.532672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.532721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.533135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.533467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.533516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.533932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.534120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.534168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.534513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.534836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.534884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.535222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.535575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.535596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.535880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.536090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.536111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.536456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.536694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.536749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-04-18 12:05:59.537092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.537415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.537441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.537811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.538130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.538177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.538525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.538862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.538909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.539353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.539615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.539664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.540006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.540280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.540339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.540619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.540903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.540924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.541246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.541537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.541585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-04-18 12:05:59.541956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.542309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.542330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.542648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.543040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.543089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.543505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.543826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.543874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.544205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.544537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.544558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.544932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.545278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.545326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.545742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.546078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.546125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.546483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.546934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.546994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-04-18 12:05:59.547354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.547633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.547681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.548098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.548437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-04-18 12:05:59.548512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-04-18 12:05:59.548846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.549265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.549313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.549557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.549991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.550039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.550312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.550647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.550669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.551033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.551369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.551416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.551828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.552158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.552204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-04-18 12:05:59.552574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.552986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.553033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.553448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.553768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.553816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.554151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.554513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.554562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.554929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.555337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.555385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.555724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.556108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.556155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.556560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.556971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.557020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.557392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.557703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.557725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-04-18 12:05:59.557998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.558294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.558342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.558742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.559141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.559162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.559506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.559913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.559960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.560367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.560767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.560816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.561158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.561555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.561575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.561862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.562127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.562148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.562438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.562803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.562824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-04-18 12:05:59.563175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.563555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.563576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.563922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.564257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.564277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.564617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.564897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.564946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.565361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.565495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.565516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.565789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.566012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.566033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.566313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.566675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.566723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-04-18 12:05:59.567109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.567443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-04-18 12:05:59.567469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-04-18 12:05:59.674325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.674725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.674774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.675197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.675601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.675649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.676095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.676439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.676516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.676863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.677261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.677309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.677663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.678066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.678114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.678523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.678843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.678892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.679323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.679654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.679703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-04-18 12:05:59.679974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.680310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.680358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.680682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.681014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.681035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.681407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.681825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.681874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.682272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.682663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.682712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.683061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.683461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.683511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.683929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.684319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.684367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.684815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.685200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.685249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-04-18 12:05:59.685587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.685976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.686024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.686348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.686689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.686710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.687023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.687340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.687387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.687745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.688100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.688147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.688518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.688756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.688777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.689148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.689486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.689535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.689855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.690213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.690233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-04-18 12:05:59.690524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.690839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.690888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.691181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.691608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.691656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.692008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.692439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.692495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.692696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.693028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.693049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-04-18 12:05:59.693397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-04-18 12:05:59.693757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.693806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.694153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.694481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.694548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.694874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.695332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.695381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 
00:30:09.380 [2024-04-18 12:05:59.695796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.696084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.696105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.696495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.696858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.696907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.697347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.697604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.697652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.697994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.698380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.698429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.698838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.699169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.699218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.699581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.699915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.699963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.700304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.700715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.700763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 
00:30:09.380 [2024-04-18 12:05:59.701169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.701437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.701461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.701777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.702096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.702144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.702532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.702919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.702969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.703315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.703659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.703709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.704105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.704420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.704488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.704841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.705125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.705174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.705599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.705853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.705900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 
00:30:09.380 [2024-04-18 12:05:59.706201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.706532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.706581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.706757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.707160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.707208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.707603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.708005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.708026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.708327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.708679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.708700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.709029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.709418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.709475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.709872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.710156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.710203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.710534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.710840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.710888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 
00:30:09.380 [2024-04-18 12:05:59.711259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.711575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.711618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.711987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.712318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.712367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.712803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.713120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.713167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.713516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.713930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.713978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.714272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.714682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.714731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.380 qpair failed and we were unable to recover it. 00:30:09.380 [2024-04-18 12:05:59.715106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.380 [2024-04-18 12:05:59.715521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.715570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.715907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.716244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.716292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 
00:30:09.381 [2024-04-18 12:05:59.716710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.717142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.717191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.717550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.717959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.718008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.718345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.718722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.718771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.719189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.719540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.719589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.719980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.720316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.720364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.720826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.721179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.721227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.721685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.722066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.722115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 
00:30:09.381 [2024-04-18 12:05:59.722461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.722750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.722798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.723146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.723482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.723531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.723932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.724273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.724320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.724620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.725110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.725160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.725580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.725866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.725929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.726282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.726663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.726719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.727126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.727375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.727423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 
00:30:09.381 [2024-04-18 12:05:59.727780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.727925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.727973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.728322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.728579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.728628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.728891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.729159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.729207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.729567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.729970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.730018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.730368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.730753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.730774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.731138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.731463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.731512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 00:30:09.381 [2024-04-18 12:05:59.731922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.732333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.732380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.381 qpair failed and we were unable to recover it. 
00:30:09.381 [2024-04-18 12:05:59.732703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.733074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.381 [2024-04-18 12:05:59.733122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.733481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.733712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.733766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.734187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.734472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.734521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.734917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.735247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.735296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.735696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.736101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.736149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.736506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.736750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.736798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.737133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.737462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.737511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 
00:30:09.382 [2024-04-18 12:05:59.737934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.738207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.738255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.738696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.738956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.739004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.739355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.739681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.739730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.740133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.740487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.740536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.740939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.741297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.741351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.741635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.741983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.742032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.742472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.742889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.742938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 
00:30:09.382 [2024-04-18 12:05:59.743226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.743653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.743674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.743949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.744233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.744254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.744595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.744868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.744917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.745260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.745645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.745694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.746005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.746415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.746471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.746894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.747251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.747299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.747646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.748016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.748064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 
00:30:09.382 [2024-04-18 12:05:59.748372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.748800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.748856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.749234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.749584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.749632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.750055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.750390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.750444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.750753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.751084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.751132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.751536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.751937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.751985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.752404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.752747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.752797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.753138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.753475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.753523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 
00:30:09.382 [2024-04-18 12:05:59.753924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.754193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.754240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.382 qpair failed and we were unable to recover it. 00:30:09.382 [2024-04-18 12:05:59.754667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.382 [2024-04-18 12:05:59.755008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.755056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.755403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.755798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.755847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.756265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.756623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.756673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.757078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.757294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.757314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.757508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.757881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.757928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.758254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.758583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.758631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 
00:30:09.383 [2024-04-18 12:05:59.759003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.759334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.759381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.759779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.760051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.760071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.760499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.760837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.760885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.761216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.761614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.761663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.761984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.762220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.762240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.762552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.762880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.762927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.763214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.763550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.763571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 
00:30:09.383 [2024-04-18 12:05:59.763917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.764249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.764297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.764615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.764974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.765022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.765287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.765604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.765653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.766033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.766353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.766401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.766823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.767275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.767324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.767720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.768051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.768099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.768507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.768841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.768888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 
00:30:09.383 [2024-04-18 12:05:59.769240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.769562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.769583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.769870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.770206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.770254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.770537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.770876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.770924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.771179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.771403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.771424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.771788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.772049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.772070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.772369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.772651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.772672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.772979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.773191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.773212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 
00:30:09.383 [2024-04-18 12:05:59.773505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.773755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.773803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.774093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.774432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.774492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.383 qpair failed and we were unable to recover it. 00:30:09.383 [2024-04-18 12:05:59.774828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.383 [2024-04-18 12:05:59.775135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.775155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.775521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.775840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.775888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.776253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.776579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.776628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.776923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.777616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.777678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.778057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.778443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.778515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 
00:30:09.384 [2024-04-18 12:05:59.778850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.779127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.779148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.779477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.779751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.779800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.780153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.780406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.780463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.780736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.780987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.781007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.781302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.781707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.781756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.782151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.782365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.782387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.782742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.783072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.783119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 
00:30:09.384 [2024-04-18 12:05:59.783475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.783894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.783943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.784283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.784553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.784602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.785010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.785272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.785320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.785605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.785925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.785946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.786238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.786519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.786568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.786920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.787257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.787306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.787626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.787966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.788015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 
00:30:09.384 [2024-04-18 12:05:59.788377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.788632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.788653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.788942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.789166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.789186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.789567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.789848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.789895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.790324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.790729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.790780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.791053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.791403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.791461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.791808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.792171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.792219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.792614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.792961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.793008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 
00:30:09.384 [2024-04-18 12:05:59.793405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.793810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.793858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.794264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.794595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.794616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.794959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.795278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.795326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.384 qpair failed and we were unable to recover it. 00:30:09.384 [2024-04-18 12:05:59.795670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.384 [2024-04-18 12:05:59.795979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.796026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.796395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.796806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.796855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.797154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.797521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.797570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.797902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.798280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.798301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-04-18 12:05:59.798575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.799003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.799050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.799402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.799749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.799797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.800158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.800481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.800530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.800925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.801262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.801310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.801656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.802013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.802060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.802340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.802616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.802666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.803004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.803368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.803415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-04-18 12:05:59.803843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.804114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.804161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.804524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.804791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.804839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.805190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.805524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.805573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.805845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.806159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.806207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.806398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.806741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.806790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.807120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.807466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.807514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.807857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.808117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.808165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-04-18 12:05:59.808535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.808896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.808943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.809295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.809644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.809692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.810045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.810328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.810375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.810745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.811151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.811199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.811564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.811874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.811921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.812184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.812497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.812545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.812904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.813307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.813354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-04-18 12:05:59.813759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.814107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.814156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.814572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.815005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.815052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.815461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.815833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.815881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.816247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.816646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.816667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-04-18 12:05:59.816985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-04-18 12:05:59.817322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.817370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.817738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.818100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.818148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.818583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.818968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.819041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-04-18 12:05:59.819380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.819729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.819778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.820223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.820638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.820686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.821090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.821413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.821481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.821895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.822338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.822386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.822794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.823156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.823203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.823448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.823770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.823818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.824238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.824571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.824620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-04-18 12:05:59.825035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.825434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.825492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.825770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.826180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.826245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.826601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.826990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.827038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.827291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.827693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.827714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.828030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.828346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.828394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.828819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.829158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.829206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.829594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.829967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.830017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-04-18 12:05:59.830422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.830782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.830831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.831173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.831602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.831651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.831963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.832355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.832402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.832689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.832962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.833010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.833475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.833810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.833831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.834107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.834397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.834463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-04-18 12:05:59.834804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.835085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.835132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-04-18 12:05:59.835576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.835826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-04-18 12:05:59.835846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.836118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.836393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.836413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.836708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.836998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.837021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.837303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.837651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.837672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.837953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.838291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.838339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.838729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.839150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.839221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.839668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.840072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.840122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-04-18 12:05:59.840381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.840798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.840848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.841198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.841572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.841629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.842060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.842375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.842396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.842674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.843002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.843049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.843294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.843575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.843636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.843855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.844158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.844184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.844472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.844735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.844756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-04-18 12:05:59.844982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.845237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.845285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.845641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.845872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.845921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.846253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.846524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.846574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.846914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.847191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.847239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.847595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.847937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.847984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.848405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.848841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.848890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.849296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.849714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.849767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-04-18 12:05:59.850133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.850561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.850611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.850973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.851299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.851354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.851697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.852062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.852132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.852502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.852924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.852973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.853405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.853818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.853867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.854214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.854531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.854579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.854977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.855247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.855296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-04-18 12:05:59.855662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.855988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.856035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-04-18 12:05:59.856493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.856820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-04-18 12:05:59.856868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.857289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.857692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.857741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.858144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.858479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.858528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.858898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.859215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.859270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.859722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.860028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.860048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.860271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.860574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.860624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-04-18 12:05:59.860955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.861310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.861331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.861673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.862077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.862125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.862487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.862698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.862759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.863057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.863415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.863472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.863842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.864110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.864158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.864500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.864904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.864953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.865376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.865734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.865755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-04-18 12:05:59.866067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.866410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.866473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.866876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.867204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.867251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.867624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.868013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.868062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.868478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.868881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.868929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.869129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.869494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.869544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.869784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.870116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.870136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.870485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.870905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.870926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-04-18 12:05:59.871278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.871613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.871662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.871842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.872189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.872237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.872635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.873047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.873096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.873372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.873724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.873772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.874058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.874374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.874422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.874825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.875151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.875200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.875554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.875863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.875884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-04-18 12:05:59.876175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.876583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.876632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.876986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.877393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-04-18 12:05:59.877441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-04-18 12:05:59.877881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.878180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.878201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.878479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.878786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.878834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.879234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.879591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.879640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.879940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.880218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.880266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.880583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.880935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.880983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 
00:30:09.389 [2024-04-18 12:05:59.881346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.881751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.881800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.882239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.882597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.882645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.883046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.883379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.883426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.883852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.884236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.884283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.884702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.885020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.885068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.885442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.885720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.885768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.886072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.886336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.886356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 
00:30:09.389 [2024-04-18 12:05:59.886572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.886780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.886801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.887010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.887340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.887396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.887756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.888163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.888211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.888569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.888957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.889006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.889345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.889675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.889723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.890115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.890532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.890580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.890859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.891246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.891267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 
00:30:09.389 [2024-04-18 12:05:59.891537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.891835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.891884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.892227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.892560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.892608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.893022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.893263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.893312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.893684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.894000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.894048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.894473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.894803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.894851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.895198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.895516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.895565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.895857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.896123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.896144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 
00:30:09.389 [2024-04-18 12:05:59.896474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.896737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.896757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.897129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.897407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.897463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.897841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.898257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.389 [2024-04-18 12:05:59.898277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.389 qpair failed and we were unable to recover it. 00:30:09.389 [2024-04-18 12:05:59.898658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.899021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.899070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.899439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.899861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.899908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.900253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.900466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.900486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.900888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.901228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.901275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 
00:30:09.390 [2024-04-18 12:05:59.901693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.902057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.902078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.902360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.902768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.902817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.903250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.903606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.903655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.903989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.904347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.904395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.904748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.905105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.905152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.905518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.905946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.905993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.906395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.906790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.906839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 
00:30:09.390 [2024-04-18 12:05:59.907169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.907577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.907626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.907997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.908401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.908460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.908719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.909052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.909099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.909477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.909743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.909790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.910205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.910535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.910584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.910998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.911310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.911357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.911712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.912041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.912062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 
00:30:09.390 [2024-04-18 12:05:59.912427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.912739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.912761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.913147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.913370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.913397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.913682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.913966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.913987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.914180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.914408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.914429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.914660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.914959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.914980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.915202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.915556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.915577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.390 qpair failed and we were unable to recover it. 00:30:09.390 [2024-04-18 12:05:59.915855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.390 [2024-04-18 12:05:59.916163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.391 [2024-04-18 12:05:59.916184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.391 qpair failed and we were unable to recover it. 
00:30:09.391 [2024-04-18 12:05:59.916471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.391 [2024-04-18 12:05:59.916836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.391 [2024-04-18 12:05:59.916856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.391 qpair failed and we were unable to recover it. 00:30:09.391 [2024-04-18 12:05:59.917142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.917473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.917495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.917857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.918133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.918153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.918434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.918737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.918758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.919023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.919314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.919362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.919723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.919989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.920038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.920380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.920572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.920622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 
00:30:09.658 [2024-04-18 12:05:59.920956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.921199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.921247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.921587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.921923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.921944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.922223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.922405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.922425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.922716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.922954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.922974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.923244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.923528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.923577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.658 qpair failed and we were unable to recover it. 00:30:09.658 [2024-04-18 12:05:59.923992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.658 [2024-04-18 12:05:59.924257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.924305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.924711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.924984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.925033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 
00:30:09.659 [2024-04-18 12:05:59.925341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.925701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.925750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.926035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.926396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.926444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.926792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.927193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.927214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.927522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.927905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.927953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.928253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.928522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.928571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.928990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.929261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.929309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.929725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.930100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.930149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 
00:30:09.659 [2024-04-18 12:05:59.930573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.930986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.931035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.931384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.931509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.931530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.931822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.932175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.932224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.932593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.932933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.932981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.933433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.933788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.933836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.934254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.934492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.934541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.934886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.935207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.935260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 
00:30:09.659 [2024-04-18 12:05:59.935659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.936070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.936117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.936471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.936831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.936880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.937199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.937522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.937571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.937982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.938338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.938359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.938592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.938931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.938980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.939401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.939713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.939734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.940027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.940411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.940466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 
00:30:09.659 [2024-04-18 12:05:59.940831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.941214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.941261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.941675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.942071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.942092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.942433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.942718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.942739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.943025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.943339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.943387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.943776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.944092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.944140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.944535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.944868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.659 [2024-04-18 12:05:59.944916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.659 qpair failed and we were unable to recover it. 00:30:09.659 [2024-04-18 12:05:59.945253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.945634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.945655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 
00:30:09.660 [2024-04-18 12:05:59.945929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.946260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.946281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.946588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.946864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.946885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.947110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.947409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.947465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.947843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.948166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.948213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.948504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.948767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.948817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.949191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.949529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.949577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.949928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.950328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.950375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 
00:30:09.660 [2024-04-18 12:05:59.950738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.950987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.951047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.951315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.951649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.951708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.952095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.952397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.952418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.952697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.952917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.952937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.953161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.953421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.953442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.953783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.953993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.954014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.954379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.954775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.954824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 
00:30:09.660 [2024-04-18 12:05:59.955178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.955560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.955608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.955943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.956350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.956399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.956764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.957120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.957168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.957436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.957745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.957794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.957998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.958330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.958377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.958796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.959140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.959194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.959461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.959798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.959845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 
00:30:09.660 [2024-04-18 12:05:59.960148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.960578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.960626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.960826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.961193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.961213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.961504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.961853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.961873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.962249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.962472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.962530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.962877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.963209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.963258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.963533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.963965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.964013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 00:30:09.660 [2024-04-18 12:05:59.964363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.964743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.660 [2024-04-18 12:05:59.964801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.660 qpair failed and we were unable to recover it. 
00:30:09.660 [2024-04-18 12:05:59.965076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.965491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.965542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.965905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.966314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.966370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.966708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.967091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.967146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.967384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.967684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.967733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.968009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.968332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.968380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.968775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.969050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.969097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.969489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.969833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.969882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 
00:30:09.661 [2024-04-18 12:05:59.970208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.970546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.970594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.970947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.971122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.971170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.971513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.971856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.971904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.972330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.972738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.972786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.973145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.973580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.973636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.973986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.974306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.974346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.974688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.974956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.975004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 
00:30:09.661 [2024-04-18 12:05:59.975409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.975771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.975823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.976180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.976501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.976550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.976970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.977437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.977511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.977767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.978086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.978135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.978530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.978966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.979015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.979347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.979637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.979686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.980085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.980410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.980431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 
00:30:09.661 [2024-04-18 12:05:59.980799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.981068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.981116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.981529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.981889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.981938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.982288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.982701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.982723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.982847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.983062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.983083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.983323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.983685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.983735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.984133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.984539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.984588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.661 [2024-04-18 12:05:59.985014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.985285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.985332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 
00:30:09.661 [2024-04-18 12:05:59.985660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.986003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.661 [2024-04-18 12:05:59.986052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.661 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.986305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.986680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.986701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.987054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.987372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.987421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.987767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.988048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.988096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.988533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.988870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.988918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.989213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.989542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.989563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.989911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.990105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.990152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 
00:30:09.662 [2024-04-18 12:05:59.990526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.990911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.990958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.991252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.991658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.991707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.991995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.992376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.992424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.992855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.993196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.993244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.993596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.994011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.994060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.994475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.994812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.994859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.995198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.995516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.995537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 
00:30:09.662 [2024-04-18 12:05:59.995808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.996098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.996118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.996354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.996633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.996681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.997061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.997404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.997462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.997833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.998082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.998132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.998482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.998819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.998868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:05:59.999344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.999720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:05:59.999769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.000123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.000436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.000496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 
00:30:09.662 [2024-04-18 12:06:00.000877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.001238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.001258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.001513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.001827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.001876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.002151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.002515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.002565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.002866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.003120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.003183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.003481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.003820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.003870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.004209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.004548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.004597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.004881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.005141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.005162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 
00:30:09.662 [2024-04-18 12:06:00.005375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.005657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.005678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.662 [2024-04-18 12:06:00.005908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.006180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.662 [2024-04-18 12:06:00.006201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.662 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.006462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.006704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.006725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.006950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.007250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.007271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.007576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.007846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.007867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.008085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.008291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.008312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.008598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.008945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.008966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 
00:30:09.663 [2024-04-18 12:06:00.009183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.009444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.009477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.009764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.010038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.010059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.010330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.010600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.010621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.010906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.011179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.011200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.011471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.011698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.011719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.012005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.012350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.012371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.012663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.012950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.012972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 
00:30:09.663 [2024-04-18 12:06:00.013198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.013421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.013441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.013734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.013980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.014001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.014341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.014693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.014714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.015082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.015287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.015308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.015521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.015709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.015729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.016013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.016298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.016319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.016617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.016819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.016839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 
00:30:09.663 [2024-04-18 12:06:00.017138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.017414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.017434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.017739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.018087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.018108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.018456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.018631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.018651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-04-18 12:06:00.018998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.019262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.663 [2024-04-18 12:06:00.019292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.019607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.019906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.019941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.020253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.020501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.020537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.020853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.021159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.021192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-04-18 12:06:00.021528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.021765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.021786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.022091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.022286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.022305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.022541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.022782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.022803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.023090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.023372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.023393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.023660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.023935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.023956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.024237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.024443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.024469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.024668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.024950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.024970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-04-18 12:06:00.025256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.025468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.025490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.025799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.026077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.026098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.026291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.026643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.026664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.026955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.027242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.027263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.027555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.027774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.027794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.028020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.028351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.028371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.028703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.028975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.028995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-04-18 12:06:00.029331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.029613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.029634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.029767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.030021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.030042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.030311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.030622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.030643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.030871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.031141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.031162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.031512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.031804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.031825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.032104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.032300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.032320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.032591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.032864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.032884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-04-18 12:06:00.033256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.033421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.033442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.033811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.034143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.034165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.034457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.034595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.034616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.034889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.035058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.035078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-04-18 12:06:00.035432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.035727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.664 [2024-04-18 12:06:00.035749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.035980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.036123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.036144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.036494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.036881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.036929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 
00:30:09.665 [2024-04-18 12:06:00.037328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.037675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.037724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.038143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.038474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.038495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.038625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.038873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.038893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.039143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.039409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.039463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.039810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.040227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.040276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.040562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.040828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.040849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.041054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.041290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.041311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 
00:30:09.665 [2024-04-18 12:06:00.041653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.042005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.042026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.042326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.042581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.042602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.042962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.043228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.043249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.043519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.043796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.043817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.043969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.044160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.044181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.044520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.044725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.044746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.045030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.045236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.045256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 
00:30:09.665 [2024-04-18 12:06:00.045530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.045815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.045837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.046008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.046131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.046153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.046264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.046564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.046586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.046945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.047242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.047263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.047552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.047848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.047868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.048095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.048461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.048482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.048756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.048957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.048978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 
00:30:09.665 [2024-04-18 12:06:00.049193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.049425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.049446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.049873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.050182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.050202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.050415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.050658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.050679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.050995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.051314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.051335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.051661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.051907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.051928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-04-18 12:06:00.052192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.052341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.665 [2024-04-18 12:06:00.052362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.052649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.053032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.053079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 
00:30:09.666 [2024-04-18 12:06:00.053330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.053545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.053566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.053782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.054007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.054028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.054336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.054463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.054488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.054904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.055121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.055142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.055505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.055775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.055795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.056099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.056320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.056340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.056651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.056984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.057005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 
00:30:09.666 [2024-04-18 12:06:00.057376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.057592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.057613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.057948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.058279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.058299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.058430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.058702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.058723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.058987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.059241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.059262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.059487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.059691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.059711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.059927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.060121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.060143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.060503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.060790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.060811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 
00:30:09.666 [2024-04-18 12:06:00.061098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.061411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.061432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.061675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.061864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.061884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.062190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.062494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.062514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.062875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.063156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.063177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.063393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.063615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.063637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.063929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.064199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.064220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.064356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.064568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.064588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 
00:30:09.666 [2024-04-18 12:06:00.064950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.065216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.065236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.065593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.065810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.065833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.066110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.066464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.066485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.066790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.067062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.067082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.067404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.067651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.067672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.067986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.068340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.068361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.666 qpair failed and we were unable to recover it. 00:30:09.666 [2024-04-18 12:06:00.068653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.666 [2024-04-18 12:06:00.068990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.069011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 
00:30:09.667 [2024-04-18 12:06:00.069317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.069572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.069594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.069846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.070146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.070171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.070457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.070611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.070632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.071002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.071286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.071307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.071617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.071885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.071908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.072084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.072407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.072427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.072789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.073072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.073092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 
00:30:09.667 [2024-04-18 12:06:00.073445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.073656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.073677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.074040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.074238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.074258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.074537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.074794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.074815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.075171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.075404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.075424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.075732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.076017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.076038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.076308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.076432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.076465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.076701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.077056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.077077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 
00:30:09.667 [2024-04-18 12:06:00.077355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.077622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.077643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.077868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.078086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.078106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.078492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.078755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.078776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.078982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.079117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.079138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.079413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.079669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.079690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.080023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.080316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.080337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.080566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.080917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.080938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 
00:30:09.667 [2024-04-18 12:06:00.081206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.081483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.081504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.081792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.082096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.082116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.667 [2024-04-18 12:06:00.082309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.082597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.667 [2024-04-18 12:06:00.082619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.667 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.082880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.083103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.083123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.083376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.083595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.083616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.083848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.084054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.084074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.084191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.084485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.084506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 
00:30:09.668 [2024-04-18 12:06:00.084792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.085151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.085171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.085470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.085749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.085770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.086053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.086405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.086426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.086735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.087027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.087048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.087233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.087497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.087518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.087802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.088067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.088088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.088358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.088622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.088644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 
00:30:09.668 [2024-04-18 12:06:00.088925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.089186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.089207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.089425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.089623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.089644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.089896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.090227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.090247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.090623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.090832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.090853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.091032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.091311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.091332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.091611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.091722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.091743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.092088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.092258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.092278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 
00:30:09.668 [2024-04-18 12:06:00.092549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.092893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.092914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.093271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.093654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.093675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.093898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.094199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.094220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.094509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.094805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.094827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.095162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.095378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.095399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.095664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.095939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.095960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.096203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.096403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.096424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 
00:30:09.668 [2024-04-18 12:06:00.096659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.097030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.097051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.097183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.097400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.097420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.097774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.098056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.098077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.668 [2024-04-18 12:06:00.098445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.098749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.668 [2024-04-18 12:06:00.098769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.668 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.099118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.099402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.099423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.099550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.099865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.099885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.100087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.100425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.100446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 
00:30:09.669 [2024-04-18 12:06:00.100763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.101065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.101087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.101366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.101631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.101652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.101919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.102182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.102202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.102489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.102819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.102839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.103120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.103322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.103343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.103692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.103965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.103985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.104320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.104533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.104554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 
00:30:09.669 [2024-04-18 12:06:00.104869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.105132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.105153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.105412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.105771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.105793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.105959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.106291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.106311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.106541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.106793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.106842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.107192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.107603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.107624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.107988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.108228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.108249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.108444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.108674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.108695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 
00:30:09.669 [2024-04-18 12:06:00.108984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.109265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.109285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.109562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.109894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.109915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.110126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.110396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.110417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.110763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.111144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.111165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.111461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.111682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.111703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.111900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.112255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.112276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.112588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.112817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.112838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 
00:30:09.669 [2024-04-18 12:06:00.113175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.113528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.113549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.113773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.113982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.114002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.114267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.114526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.114547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.114766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.115098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.115119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.669 qpair failed and we were unable to recover it. 00:30:09.669 [2024-04-18 12:06:00.115403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.669 [2024-04-18 12:06:00.115693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.115714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.116048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.116330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.116351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.116585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.116962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.116983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 
00:30:09.670 [2024-04-18 12:06:00.117299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.117630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.117651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.117911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.118224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.118250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.118579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.118935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.118956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.119105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.119436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.119462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.119791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.120175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.120197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.120496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.120783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.120804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.121066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.121348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.121369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 
00:30:09.670 [2024-04-18 12:06:00.121601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.121937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.121958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.122325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.122675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.122696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.122993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.123353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.123373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.123722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.124071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.124092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.124459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.124735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.124757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.125062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.125337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.125358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.125597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.125735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.125756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 
00:30:09.670 [2024-04-18 12:06:00.126041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.126264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.126285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.126670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.126974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.126996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.127353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.127729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.127752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.128044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.128331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.128352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.128712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.128990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.129011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.129297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.129526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.129548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.129775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.130072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.130093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 
00:30:09.670 [2024-04-18 12:06:00.130429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.130815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.130836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.131133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.131406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.131427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.131842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.132238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.132258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.132566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.132844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.132860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.133238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.133532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.133549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.670 qpair failed and we were unable to recover it. 00:30:09.670 [2024-04-18 12:06:00.133750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.670 [2024-04-18 12:06:00.134022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.134038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.134315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.134635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.134651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 
00:30:09.671 [2024-04-18 12:06:00.134952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.135209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.135259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.135595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.135915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.135964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.136347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.136611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.136661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.136987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.137378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.137427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.137848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.138179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.138228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.138651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.138998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.139025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.139304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.139634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.139650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 
00:30:09.671 [2024-04-18 12:06:00.139926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.140194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.140209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.140558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.140753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.140769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.140945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.141273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.141288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.141563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.141824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.141841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.142126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.142469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.142485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.142822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.143042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.143058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.143297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.143561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.143578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 
00:30:09.671 [2024-04-18 12:06:00.143864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.144149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.144164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.144425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.144771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.144788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.145049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.145328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.145344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.145625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.145900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.145915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.146101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.146401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.146417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.146587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.146786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.146806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.147084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.147359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.147375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 
00:30:09.671 [2024-04-18 12:06:00.147587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.147913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.147929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.148120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.148464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.148496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.148843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.149050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.149068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.149291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.149555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.149572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.149849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.150127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.150142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.150416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.150709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.150725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.671 qpair failed and we were unable to recover it. 00:30:09.671 [2024-04-18 12:06:00.151001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.671 [2024-04-18 12:06:00.151217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.151233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 
00:30:09.672 [2024-04-18 12:06:00.151460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.151780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.151796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.152074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.152345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.152361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.152687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.153010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.153026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.153300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.153567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.153583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.153858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.154129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.154145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.154472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.154816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.154835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.155187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.155456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.155472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 
00:30:09.672 [2024-04-18 12:06:00.155846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.156120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.156137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.156369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.156708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.156724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.157025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.157372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.157388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.157662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.157936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.157952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.158229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.158428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.158444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.158791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.159137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.159153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.159476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.159694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.159710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 
00:30:09.672 [2024-04-18 12:06:00.159994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.160263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.160279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.160553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.160805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.160823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.160951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.161254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.161270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.161553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.161897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.161912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.162166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.162444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.162472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.162742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.163006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.163023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.163292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.163547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.163563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 
00:30:09.672 [2024-04-18 12:06:00.163849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.164114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.164129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.164472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.164803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.164820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.672 qpair failed and we were unable to recover it. 00:30:09.672 [2024-04-18 12:06:00.165033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.672 [2024-04-18 12:06:00.165231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.165246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.165529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.165783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.165799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.166077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.166349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.166368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.166651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.166924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.166940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.167153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.167381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.167397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 
00:30:09.673 [2024-04-18 12:06:00.167665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.167918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.167934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.168293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.168560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.168577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.168780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.169102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.169118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.169327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.169517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.169533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.169856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.170177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.170193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.170400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.170746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.170762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.171111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.171233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.171249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 
00:30:09.673 [2024-04-18 12:06:00.171571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.171860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.171876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.172228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.172448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.172469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.172755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.173076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.173092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.173437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.173734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.173749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.173981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.174260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.174275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.174609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.174808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.174824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.175170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.175501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.175516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 
00:30:09.673 [2024-04-18 12:06:00.175790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.176058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.176074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.176368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.176667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.176683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.176947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.177229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.177246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.177489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.177771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.177787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.178119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.178425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.178441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.178720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.178925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.178941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.179287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.179579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.179595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 
00:30:09.673 [2024-04-18 12:06:00.179925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.180266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.180282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.180630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.180950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.180966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.181247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.181428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.181443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.673 qpair failed and we were unable to recover it. 00:30:09.673 [2024-04-18 12:06:00.181737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.673 [2024-04-18 12:06:00.181940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.181956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.182177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.182519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.182535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.182884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.182998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.183014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.183358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.183650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.183666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 
00:30:09.674 [2024-04-18 12:06:00.183958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.184213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.184229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.184608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.184930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.184946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.185221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.185600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.185616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.185954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.186240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.186256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.186546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.186762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.186778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.186999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.187321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.187338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.187664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.187986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.188002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 
00:30:09.674 [2024-04-18 12:06:00.188284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.188584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.188600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.188804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.189011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.189027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.189301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.189569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.189585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.189803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.189994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.190010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.190206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.190482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.190498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.190833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.191094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.191110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.191459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.191709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.191725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 
00:30:09.674 [2024-04-18 12:06:00.191994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.192268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.192283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.192559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.192920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.192936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.193259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.193533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.193548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.193895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.194111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.194126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.194417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.194737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.194753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.195098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.195364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.195380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.195639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.195894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.195910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 
00:30:09.674 [2024-04-18 12:06:00.196085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.196357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.196373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.196592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.196809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.196825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.197093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.197430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.197446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.674 [2024-04-18 12:06:00.197774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.198094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.674 [2024-04-18 12:06:00.198109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.674 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.198318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.198661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.198677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.199005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.199192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.199208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.199534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.199830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.199846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 
00:30:09.944 [2024-04-18 12:06:00.200169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.200429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.200445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.200726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.201000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.201016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.201282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.201548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.201570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.201885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.202073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.202089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.202390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.202679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.202695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.202957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.203244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.203260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.203593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.203859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.203875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 
00:30:09.944 [2024-04-18 12:06:00.204217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.204474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.204490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.204816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.205087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.205103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.205308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.205650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.205666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.205962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.206228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.206244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.206500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.206843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.206859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.207106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.207475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.207491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.207821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.208089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.208105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 
00:30:09.944 [2024-04-18 12:06:00.208425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.208689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.208705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.208959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.209209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.209224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.209566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.209850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.209865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.210088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.210280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.210296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.210510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.210698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.944 [2024-04-18 12:06:00.210714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.944 qpair failed and we were unable to recover it. 00:30:09.944 [2024-04-18 12:06:00.210992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.211273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.211289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.211467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.211812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.211827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 
00:30:09.945 [2024-04-18 12:06:00.212114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.212320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.212336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.212668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.212882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.212898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.213238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.213456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.213472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.213745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.214018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.214034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.214246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.214523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.214546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.214828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.215167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.215183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.215506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.215782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.215797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 
00:30:09.945 [2024-04-18 12:06:00.216090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.216289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.216305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.216643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.217001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.217017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.217391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.217712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.217728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.217992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.218273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.218289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.218513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.218868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.218884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.219173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.219493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.219509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.219781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.220068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.220084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 
00:30:09.945 [2024-04-18 12:06:00.220351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.220701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.220717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.221022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.221343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.221359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.221578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.221801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.221817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.222102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.222357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.222373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.222646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.222923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.222938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.223219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.223563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.223579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.223845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.224118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.224134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 
00:30:09.945 [2024-04-18 12:06:00.224393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.224735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.224751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.225075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.225443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.225462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.945 qpair failed and we were unable to recover it. 00:30:09.945 [2024-04-18 12:06:00.225840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.945 [2024-04-18 12:06:00.226162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.226177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.226460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.226788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.226804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.227077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.227401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.227418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.227679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.227878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.227894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.228162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.228358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.228374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 
00:30:09.946 [2024-04-18 12:06:00.228667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.228998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.229013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.229378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.229742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.229758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.230084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.230348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.230364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.230621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.230835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.230851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.231130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.231394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.231410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.231708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.231978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.231994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.232184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.232470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.232486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 
00:30:09.946 [2024-04-18 12:06:00.232777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.233117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.233133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.233398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.233667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.233683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.234034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.234241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.234257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.234546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.234915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.234931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.235282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.235479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.235494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.235727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.236049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.236065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.236289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.236633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.236650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 
00:30:09.946 [2024-04-18 12:06:00.236906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.237182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.237198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.237489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.237713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.237729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.237939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.238119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.238135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.238406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.238748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.238764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.239089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.239425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.239441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.946 qpair failed and we were unable to recover it. 00:30:09.946 [2024-04-18 12:06:00.239777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.946 [2024-04-18 12:06:00.240074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.240089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.240435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.240707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.240723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 
00:30:09.947 [2024-04-18 12:06:00.241015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.241287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.241303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.241644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.241993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.242009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.242335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.242630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.242651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.242760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.243009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.243025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.243350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.243678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.243694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.244045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.244299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.244315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.244429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.244689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.244706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 
00:30:09.947 [2024-04-18 12:06:00.244995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.245250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.245266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.245546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.245892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.245907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.246253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.246540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.246556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.246763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.247105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.247122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.247472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.247724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.247739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.247945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.248123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.248141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.248419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.248763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.248779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 
00:30:09.947 [2024-04-18 12:06:00.249078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.249400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.249415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.249716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.249966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.249982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.250246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.250534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.250550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.250882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.251251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.251267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.251645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.251995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.252011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.252280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.252645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.252661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.252941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.253193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.253209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 
00:30:09.947 [2024-04-18 12:06:00.253434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.253759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.253775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.254047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.254297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.254316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.254501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.254835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.254851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.255132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.255477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.255498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.947 qpair failed and we were unable to recover it. 00:30:09.947 [2024-04-18 12:06:00.255772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.947 [2024-04-18 12:06:00.255975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.255992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.256292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.256407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.256423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.256629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.256883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.256899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 
00:30:09.948 [2024-04-18 12:06:00.257116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.257458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.257474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.257649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.257968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.257983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.258308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.258575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.258590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.258877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.259184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.259199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.259457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.259650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.259666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.259945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.260158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.260174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.260498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.260806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.260823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 
00:30:09.948 [2024-04-18 12:06:00.261104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.261382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.261398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.261673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.262017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.262033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.262160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.262483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.262499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.262755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.263100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.263116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.263414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.263666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.263683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.263940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.264281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.264296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.264567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.264916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.264932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 
00:30:09.948 [2024-04-18 12:06:00.265281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.265543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.265560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.265934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.266261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.266277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.266540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.266910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.266925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.267297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.267582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.267598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.267811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.268029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.268045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.268303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.268577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.268593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 00:30:09.948 [2024-04-18 12:06:00.268954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.269239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.948 [2024-04-18 12:06:00.269255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.948 qpair failed and we were unable to recover it. 
00:30:09.949 [2024-04-18 12:06:00.269577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.269796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.269811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.270133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.270482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.270498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.270728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.270980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.270996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.271209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.271462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.271479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.271743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.272015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.272031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.272378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.272704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.272721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.273003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.273271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.273286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 
00:30:09.949 [2024-04-18 12:06:00.273574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.273789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.273805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.274128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.274481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.274497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.274773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.275039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.275055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.275183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.275514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.275530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.275793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.276151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.276171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.276430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.276774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.276790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.277083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.277430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.277446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 
00:30:09.949 [2024-04-18 12:06:00.277712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.277916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.277933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.278310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.278631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.278648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.279026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.279392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.279408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.279733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.280079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.280095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.280420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.280699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.280715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.280930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.281147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-04-18 12:06:00.281163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-04-18 12:06:00.281485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.281752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.281767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-04-18 12:06:00.282114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.282377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.282393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.282651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.282999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.283015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.283316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.283520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.283536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.283900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.284241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.284257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.284634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.284928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.284944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.285237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.285351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.285367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.285696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.285968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.285984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-04-18 12:06:00.286329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.286647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.286663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.287025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.287362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.287377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.287706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.288028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.288044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.288368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.288750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.288766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.289041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.289409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.289425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.289626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.289825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.289839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.290180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.290457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.290473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-04-18 12:06:00.290697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.290964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.290980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.291171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.291502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.291518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.291638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.292009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.292024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.292355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.292608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.292624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.292883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.293155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.293171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.293428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.293687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.293704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.293987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.294202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.294218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-04-18 12:06:00.294496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.294703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.294719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.295044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.295329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.295345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.295613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.295943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.295960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-04-18 12:06:00.296217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.296483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-04-18 12:06:00.296499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.296848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.297053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.297069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.297351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.297624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.297639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.297931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.298253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.298269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-04-18 12:06:00.298557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.298744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.298760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.299097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.299363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.299379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.299679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.300025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.300041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.300386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.300636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.300652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.300845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.301190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.301206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.301557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.301853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.301870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.302040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.302359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.302375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-04-18 12:06:00.302669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.302887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.302903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.303202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.303413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.303429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.303743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.304071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.304086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.304370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.304716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.304732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.305003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.305279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.305295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.305567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.305905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.305921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.306259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.306541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.306556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-04-18 12:06:00.306821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.306987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.307002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.307226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.307456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.307472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.307820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.308190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.308206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.308486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.308774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.308790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.309077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.309347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.309363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-04-18 12:06:00.309571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-04-18 12:06:00.309696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.309712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.310007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.310286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.310302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
00:30:09.952 [2024-04-18 12:06:00.310565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.310838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.310854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.311200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.311401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.311418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.311676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.311943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.311959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.312282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.312548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.312564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.312914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.313183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.313199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.313329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.313649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.313665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-04-18 12:06:00.313947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.314202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-04-18 12:06:00.314218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x614000020040 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, until the final occurrence shown below ...]
00:30:09.957 [2024-04-18 12:06:00.394045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.957 [2024-04-18 12:06:00.394319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.957 [2024-04-18 12:06:00.394335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:09.957 qpair failed and we were unable to recover it.
00:30:09.957 [2024-04-18 12:06:00.394625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.394947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.394963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.395180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.395475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.395491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.395821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.396096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.396111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.396323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.396646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.396662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.396917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.397268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.397283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.397540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.397759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.397775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.398146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.398416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.398431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 
00:30:09.957 [2024-04-18 12:06:00.398611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.398889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.957 [2024-04-18 12:06:00.398905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.957 qpair failed and we were unable to recover it. 00:30:09.957 [2024-04-18 12:06:00.399109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.399432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.399448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.399737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.400012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.400027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.400296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.400594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.400610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.400980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.401168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.401184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.401468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.401671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.401688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.402040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.402307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.402322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 
00:30:09.958 [2024-04-18 12:06:00.402535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.402811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.402826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.403078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.403403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.403418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.403616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.403813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.403829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.404102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.404314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.404329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.404593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.404856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.404872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.405085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.405325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.405341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.405632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.405952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.405968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 
00:30:09.958 [2024-04-18 12:06:00.406247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.406571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.406586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.406865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.407133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.407149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.407473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.407821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.407837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.408127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.408394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.408410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.408687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.409007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.409022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.409299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.409505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.409521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.409783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.410065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.410080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 
00:30:09.958 [2024-04-18 12:06:00.410306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.410645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.410661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.410923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.411098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.411113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.411462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.411828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.411843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.412065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.412308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.412323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.412649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.412846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.412861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.413129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.413384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.413399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-04-18 12:06:00.413624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.413882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.413898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 
00:30:09.958 [2024-04-18 12:06:00.414069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.414357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-04-18 12:06:00.414373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.414670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.415015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.415031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.415252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.415459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.415474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.415748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.415955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.415972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.416247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.416536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.416553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.416826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.417041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.417057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.417384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.417589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.417608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 
00:30:09.959 [2024-04-18 12:06:00.417870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.418145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.418161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.418362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.418644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.418665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.418905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.419177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.419193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.419395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.419718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.419734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.420063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.420278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.420294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.420557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.420843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.420859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.421052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.421349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.421365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 
00:30:09.959 [2024-04-18 12:06:00.421644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.421888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.421904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.422185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.422440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.422462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.422661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.423012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.423031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.423401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.423672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.423688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.423968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.424178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.424194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.424390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.424670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.424686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.425035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.425392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.425408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 
00:30:09.959 [2024-04-18 12:06:00.425630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.425892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.425908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.426185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.426471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.426486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.426760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.426955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.426971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-04-18 12:06:00.427190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.427447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-04-18 12:06:00.427469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.427804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.428078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.428094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.428352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.428618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.428637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.428966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.429151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.429167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 
00:30:09.960 [2024-04-18 12:06:00.429439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.429728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.429744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.430045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.430318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.430334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.430594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.430846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.430863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.431141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.431423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.431439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.431704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.432026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.432042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.432332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.432608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.432624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.432814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.433134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.433150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 
00:30:09.960 [2024-04-18 12:06:00.433423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.433750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.433766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.434054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.434253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.434270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.434594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.434881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.434897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.435105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.435416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.435432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.435691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.435898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.435913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.436129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.436381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.436397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.436674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.436996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.437012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 
00:30:09.960 [2024-04-18 12:06:00.437289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.437579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.437595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.437881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.438194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.438209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.438549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.438837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.438853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.439178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.439487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.439503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.439782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.440052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.440067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.440244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.440509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.440525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.440713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.440906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.440921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 
00:30:09.960 [2024-04-18 12:06:00.441163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.441449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.441486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.441698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.441915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.441932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.442192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.442469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.442486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.960 [2024-04-18 12:06:00.442831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.443035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.960 [2024-04-18 12:06:00.443051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.960 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.443346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.443714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.443730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.444004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.444311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.444327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.444533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.444861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.444877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 
00:30:09.961 [2024-04-18 12:06:00.445087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.445419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.445435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.445657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.445953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.445969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.446293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.446479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.446496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.446779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.447053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.447069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.447294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.447404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.447420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.447768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.447962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.447979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.448236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.448516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.448532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 
00:30:09.961 [2024-04-18 12:06:00.448649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.448938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.448954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.449299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.449516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.449533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.449803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.450078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.450094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.450352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.450572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.450588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.450834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.450966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.450981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.451263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.451539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.451555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.451820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.452162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.452178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 
00:30:09.961 [2024-04-18 12:06:00.452360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.452652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.452668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.453019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.453290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.453306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.453562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.453884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.453900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.454165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.454437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.454457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.454626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.454828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.454844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.455103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.455369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.455385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.455651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.455938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.455953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 
00:30:09.961 [2024-04-18 12:06:00.456231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.456575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.456591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.456894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.457215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.457231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.457406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.457749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.457777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.458040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.458295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.458312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.961 qpair failed and we were unable to recover it. 00:30:09.961 [2024-04-18 12:06:00.458518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.961 [2024-04-18 12:06:00.458883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.458899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.459177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.459361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.459376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.459581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.459762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.459777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 
00:30:09.962 [2024-04-18 12:06:00.460023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.460349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.460365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.460626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.460969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.460986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.461190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.461467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.461484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.461750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.462016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.462032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.462370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.462571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.462588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.462879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.463094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.463111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.463371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.463554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.463570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 
00:30:09.962 [2024-04-18 12:06:00.463835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.464156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.464173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.464469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.464574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.464590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.464784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.465059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.465075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.465321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.465640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.465657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.465861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.466155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.466170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.466516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.466728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.466744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.467017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.467272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.467288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 
00:30:09.962 [2024-04-18 12:06:00.467511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.467852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.467868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.468198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.468467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.468483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.468757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.469029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.469045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.469288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.469578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.469594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.469816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.470015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.470031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.470291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.470491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.470507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.470715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.470922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.470937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 
00:30:09.962 [2024-04-18 12:06:00.471183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.471439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.471460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.471666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.471871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.471887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.472093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.472415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.472431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.472760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.473113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.962 [2024-04-18 12:06:00.473128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.962 qpair failed and we were unable to recover it. 00:30:09.962 [2024-04-18 12:06:00.473381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.473587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.473603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.473862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.474197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.474212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.474434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.474650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.474666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 
00:30:09.963 [2024-04-18 12:06:00.474955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.475237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.475253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.475469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.475707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.475722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.475983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.476203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.476218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.476519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.476787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.476803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.477047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.477374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.477389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.477735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.478060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.478076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.478423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.478695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.478711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 
00:30:09.963 [2024-04-18 12:06:00.479049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.479336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.479351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.479679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.480046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.480061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.480330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.480650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.480667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.481016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.481289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.481305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.481663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.481955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.481971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:09.963 [2024-04-18 12:06:00.482172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.482437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.963 [2024-04-18 12:06:00.482459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:09.963 qpair failed and we were unable to recover it. 00:30:10.235 [2024-04-18 12:06:00.482815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.483104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.483123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.235 qpair failed and we were unable to recover it. 
00:30:10.235 [2024-04-18 12:06:00.483490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.483810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.483826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.235 qpair failed and we were unable to recover it. 00:30:10.235 [2024-04-18 12:06:00.484093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.484379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.484394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.235 qpair failed and we were unable to recover it. 00:30:10.235 [2024-04-18 12:06:00.484741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.485008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.485024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.235 qpair failed and we were unable to recover it. 00:30:10.235 [2024-04-18 12:06:00.485300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.485633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.485650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.235 qpair failed and we were unable to recover it. 00:30:10.235 [2024-04-18 12:06:00.485951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.486228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.486243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.235 qpair failed and we were unable to recover it. 00:30:10.235 [2024-04-18 12:06:00.486527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.235 [2024-04-18 12:06:00.486801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.486816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.487014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.487201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.487217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 
00:30:10.236 [2024-04-18 12:06:00.487484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.487826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.487842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.488207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.488462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.488479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.488756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.488972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.488987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.489259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.489536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.489559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.489942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.490249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.490265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.490555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.490900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.490916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.491184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.491506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.491522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 
00:30:10.236 [2024-04-18 12:06:00.491818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.492171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.492186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.492583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.492849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.492865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.493164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.493415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.493430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.493787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.494074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.494090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.494383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.494748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.494764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.495081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.495423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.495439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.495771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.496051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.496067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 
00:30:10.236 [2024-04-18 12:06:00.496385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.496697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.496713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.496970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.497332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.497348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.497709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.498009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.498025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.498353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.498689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.498705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.498983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.499259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.499274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.499598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.499884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.499900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.500131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.500398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.500414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 
00:30:10.236 [2024-04-18 12:06:00.500711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.500999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.501015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.501337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.501566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.501583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.501851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.502064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.502080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.236 [2024-04-18 12:06:00.502358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.502713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.236 [2024-04-18 12:06:00.502729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.236 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.503101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.503364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.503380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.503701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.503979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.503995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.504261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.504592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.504609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 
00:30:10.237 [2024-04-18 12:06:00.504890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.505169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.505185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.505446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.505719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.505735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.505957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.506216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.506231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.506588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.506859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.506875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.507086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.507367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.507383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.507685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.508027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.508043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.508334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.508658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.508676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 
00:30:10.237 [2024-04-18 12:06:00.508933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.509205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.509220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.509534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.509889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.509905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.510285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.510685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.510701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.511032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.511379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.511394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.511710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.511981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.511996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.512365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.512706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.512722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.513015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.513341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.513357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 
00:30:10.237 [2024-04-18 12:06:00.513686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.513978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.513994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.514334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.514605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.514621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.514926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.515144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.515162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.515499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.515821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.515837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.516043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.516352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.516369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.516684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.516958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.516974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.517199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.517546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.517562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 
00:30:10.237 [2024-04-18 12:06:00.517821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.518135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.518151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.518425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.518657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.518673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.519017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.519404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.519419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.519800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.520135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.520152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.237 qpair failed and we were unable to recover it. 00:30:10.237 [2024-04-18 12:06:00.520411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.520707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.237 [2024-04-18 12:06:00.520724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.521027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.521299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.521317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.521690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.522034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.522052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 
00:30:10.238 [2024-04-18 12:06:00.522376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.522657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.522673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.523006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.523269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.523285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.523627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.523950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.523965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.524237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.524591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.524608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.524912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.525182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.525197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.525521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.525736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.525752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.526100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.526472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.526488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 
00:30:10.238 [2024-04-18 12:06:00.526834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.527040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.527056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.527379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.527668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.527684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.527955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.528281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.528296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.528624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.528906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.528922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.529173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.529537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.529553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.529896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.530171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.530187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.530535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.530792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.530808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 
00:30:10.238 [2024-04-18 12:06:00.531132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.531477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.531493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.531866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.532076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.532092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.532432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.532815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.532832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.533092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.533359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.533375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.533616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.533907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.533922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.534258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.534603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.534618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.534993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.535341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.535357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 
00:30:10.238 [2024-04-18 12:06:00.535656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.535989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.536005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.536353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.536642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.536658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.238 [2024-04-18 12:06:00.536916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.537247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.238 [2024-04-18 12:06:00.537263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.238 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.537602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.537843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.537858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.538142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.538429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.538445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.538729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.538987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.539003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.539337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.539693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.539709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 
00:30:10.239 [2024-04-18 12:06:00.540035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.540253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.540270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.540600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.540941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.540957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.541329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.541680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.541696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.541949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.542318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.542334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.542607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.542952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.542968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.543337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.543589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.543605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.543966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.544287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.544303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 
00:30:10.239 [2024-04-18 12:06:00.544499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.544827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.544842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.545189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.545460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.545476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.545736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.546081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.546097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.546465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.546813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.546829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.547087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.547412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.547428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.547780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.548046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.548062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.548415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.548698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.548715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 
00:30:10.239 [2024-04-18 12:06:00.549007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.549363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.549379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.549650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.549927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.549944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.550247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.550504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.550521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.550866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.551136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.551152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.551420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.551691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.551707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.551977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.552325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.552341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.552690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.552944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.552960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 
00:30:10.239 [2024-04-18 12:06:00.553335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.553653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.553669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.553953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.554303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.554319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.554619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.554881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.554898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.239 qpair failed and we were unable to recover it. 00:30:10.239 [2024-04-18 12:06:00.555246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-04-18 12:06:00.555613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.555629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.555975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.556270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.556286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.556652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.556902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.556918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.557285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.557607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.557623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 
00:30:10.240 [2024-04-18 12:06:00.557956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.558234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.558250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.558551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.558846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.558862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.559058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.559310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.559326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.559696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.560047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.560063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.560391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.560716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.560732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.560945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.561273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.561289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.561500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.561752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.561768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 
00:30:10.240 [2024-04-18 12:06:00.562035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.562400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.562415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.562685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.562958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.562974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.563254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.563576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.563592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.563933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.564245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.564261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.564468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.564820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.564836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.565092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.565281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.565297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.565630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.565963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.565979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 
00:30:10.240 [2024-04-18 12:06:00.566360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.566632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.566648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.566939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.567218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.567234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.567583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.567861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.567877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.568140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.568435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.568458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.240 qpair failed and we were unable to recover it. 00:30:10.240 [2024-04-18 12:06:00.568791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-04-18 12:06:00.569064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.569080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.569432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.569637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.569658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.569933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.570279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.570295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 
00:30:10.241 [2024-04-18 12:06:00.570576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.570853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.570869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.571157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.571432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.571448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.571802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.572071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.572087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.572346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.572619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.572634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.572897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.573100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.573115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.573463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.573790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.573806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.574121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.574382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.574398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 
00:30:10.241 [2024-04-18 12:06:00.574662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.574874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.574890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.575169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.575514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.575531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.575799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.576141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.576157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.576386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.576679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.576695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.576973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.577266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.577282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.577615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.577813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.577829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.578087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.578430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.578446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 
00:30:10.241 [2024-04-18 12:06:00.578799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.579003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.579020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.579212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.579432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.579448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.579779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.580127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.580143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.580415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.580698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.580714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.580990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.581269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.581285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.581634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.581999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.582014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.582246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.582512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.582529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 
00:30:10.241 [2024-04-18 12:06:00.582829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.583043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.583059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.583340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.583688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.583704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.584029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.584244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-04-18 12:06:00.584260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.241 qpair failed and we were unable to recover it. 00:30:10.241 [2024-04-18 12:06:00.584539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.584881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.584897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.585157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.585381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.585397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.585655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.585992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.586009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.586371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.586692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.586708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 
00:30:10.242 [2024-04-18 12:06:00.587054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.587419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.587434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.587765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.588048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.588067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.588335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.588626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.588642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.588908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.589092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.589108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.589383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.589779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.589796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.590127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.590464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.590480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.590751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.591061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.591077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 
00:30:10.242 [2024-04-18 12:06:00.591414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.591711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.591727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.592010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.592284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.592300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.592643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.592924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.592940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.593282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.593627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.593643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.593900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.594114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.594130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.594460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.594833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.594849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.595145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.595374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.595389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 
00:30:10.242 [2024-04-18 12:06:00.595648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.595974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.595991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.596276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.596544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.596560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.596867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.597199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.597215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.597475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.597731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.597746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.598140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.598353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.598369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.598588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.598909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.598924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.599201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.599390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.599405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 
00:30:10.242 [2024-04-18 12:06:00.599762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.600108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.600124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.600416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.600762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.600778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.601005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.601329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.601345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.242 [2024-04-18 12:06:00.601635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.601837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.242 [2024-04-18 12:06:00.601854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.242 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.602199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.602471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.602486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.602775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.603120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.603136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.603463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.603729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.603745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 
00:30:10.243 [2024-04-18 12:06:00.604030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.604335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.604351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.604623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.604907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.604923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.605177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.605447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.605468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.605772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.606036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.606052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.606379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.606781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.606797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.607043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.607329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.607345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.607676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.607996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.608015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 
00:30:10.243 [2024-04-18 12:06:00.608284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.608658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.608674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.609020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.609357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.609372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.609631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.609887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.609902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.610201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.610474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.610490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.610857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.611074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.611090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.611368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.611717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.611733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.612021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.612351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.612367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 
00:30:10.243 [2024-04-18 12:06:00.612714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.613079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.613095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.613462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.613735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.613750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.614095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.614415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.614432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.614798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.615004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.615020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.615311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.615677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.615693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.616048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.616304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.616319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.616590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.616878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.616894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 
00:30:10.243 [2024-04-18 12:06:00.617154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.617408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.617424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.617725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.618037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.618054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.243 [2024-04-18 12:06:00.618422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.618625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.243 [2024-04-18 12:06:00.618641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.243 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.618989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.619184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.619200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.619469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.619740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.619756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.620108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.620472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.620491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.620824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.621082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.621099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 
00:30:10.244 [2024-04-18 12:06:00.621473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.621727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.621742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.622067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.622359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.622375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.622628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.622883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.622899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.623110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.623478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.623494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.623771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.624047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.624064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.624304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.624558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.624574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.624839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.625132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.625148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 
00:30:10.244 [2024-04-18 12:06:00.625523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.625826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.625842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.626115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.626387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.626403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.626736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.627081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.627097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.627356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.627617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.627633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.627859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.628181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.628197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.628544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.628815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.628831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.629115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.629437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.629457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 
00:30:10.244 [2024-04-18 12:06:00.629780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.630065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.630081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.630447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.630655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.630671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.630963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.631225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.631241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.631587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.631931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.631947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.632355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.632611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.632627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.632909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.633188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.633204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.633491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.633857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.633874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 
00:30:10.244 [2024-04-18 12:06:00.634220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.634602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.634618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.634975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.635247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.635263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.635602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.635888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.635904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.244 qpair failed and we were unable to recover it. 00:30:10.244 [2024-04-18 12:06:00.636228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.636507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.244 [2024-04-18 12:06:00.636523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.636874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.637245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.637261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.637610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.637909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.637925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.638263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.638520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.638536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 
00:30:10.245 [2024-04-18 12:06:00.638863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.639185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.639202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.639433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.639725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.639741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.640072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.640354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.640373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.640652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.640917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.640933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.641230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.641575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.641590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.641915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.642224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.642240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.642584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.642928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.642944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 
00:30:10.245 [2024-04-18 12:06:00.643314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.643686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.643702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.644053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.644418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.644434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.644708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.644998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.645015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.645291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.645577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.645593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.645924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.646198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.646215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.646568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.646785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.646801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.647134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.647481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.647497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 
00:30:10.245 [2024-04-18 12:06:00.647869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.648078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.648098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.648446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.648688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.648704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.649049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.649371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.649387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.649652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.649934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.649949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.650225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.650548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.650564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.650830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.651098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.651114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 00:30:10.245 [2024-04-18 12:06:00.651482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.651735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.245 [2024-04-18 12:06:00.651751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.245 qpair failed and we were unable to recover it. 
00:30:10.245 [2024-04-18 12:06:00.651973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.652188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.652204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.652475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.652795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.652811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.653109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.653402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.653417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.653741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.654006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.654022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.654326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.654588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.654605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.654866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.655089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.655105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.655486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.655832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.655848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 
00:30:10.246 [2024-04-18 12:06:00.656123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.656482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.656498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.656773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.657087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.657103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.657464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.657806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.657822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.658085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.658414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.658430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.658691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.659078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.659095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.659418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.659670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.659686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.659965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.660161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.660177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 
00:30:10.246 [2024-04-18 12:06:00.660444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.660722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.660738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.661063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.661408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.661424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.661746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.662070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.662086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.662439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.662812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.662828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.663177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.663439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.663465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.663748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.664096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.664111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.664368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.664666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.664682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 
00:30:10.246 [2024-04-18 12:06:00.664943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.665265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.665281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.665607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.665893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.665909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.666179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.666522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.666537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.666850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.667201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.667217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.667547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.667870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.667886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.668157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.668502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.668519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.668843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.669186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.669202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 
00:30:10.246 [2024-04-18 12:06:00.669573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.669847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.246 [2024-04-18 12:06:00.669863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.246 qpair failed and we were unable to recover it. 00:30:10.246 [2024-04-18 12:06:00.670151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.670495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.670511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.670790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.671000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.671016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.671319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.671639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.671655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.671856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.672178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.672194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.672522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.672738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.672754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.673009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.673380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.673395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 
00:30:10.247 [2024-04-18 12:06:00.673747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.674016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.674032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.674377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.674646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.674663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.675043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.675297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.675313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.675710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.675985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.676000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.676281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.676538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.676554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.676834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.677129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.677145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.677424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.677639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.677656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 
00:30:10.247 [2024-04-18 12:06:00.678012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.678406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.678422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.678751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.679015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.679031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.679304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.679653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.679669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.679885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.680232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.680248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.680460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.680779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.680795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.680985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.681258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.681274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.681606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.681819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.681835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 
00:30:10.247 [2024-04-18 12:06:00.682091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.682436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.682456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.682784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.683132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.683148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.683473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.683753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.683769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.684137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.684417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.684433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.684698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.684967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.684982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.685324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.685538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.685555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.685909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.686180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.686196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 
00:30:10.247 [2024-04-18 12:06:00.686553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.686860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.686877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.247 [2024-04-18 12:06:00.687153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.687500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.247 [2024-04-18 12:06:00.687517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.247 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.687792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.688057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.688073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.688398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.688719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.688735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.688990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.689335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.689351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.689642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.689962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.689978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.690258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.690618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.690634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 
00:30:10.248 [2024-04-18 12:06:00.690916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.691217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.691234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.691593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.691962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.691978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.692241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.692513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.692529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.692883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.693157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.693173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.693390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.693774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.693790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.694063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.694391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.694407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.694695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.695060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.695075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 
00:30:10.248 [2024-04-18 12:06:00.695299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.695645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.695661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.696033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.696285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.696301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.696627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.696969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.696985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.697345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.697665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.697682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.698029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.698394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.698410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.698686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.698948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.698964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.699289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.699590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.699606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 
00:30:10.248 [2024-04-18 12:06:00.699880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.700139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.700155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.700483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.700762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.700778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.701053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.701349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.701365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.701576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.701865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.701883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.702171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.702516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.702532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.702900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.703167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.703182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.703465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.703793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.703808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 
00:30:10.248 [2024-04-18 12:06:00.704136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.704413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.704429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.704758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.705021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.705037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.248 qpair failed and we were unable to recover it. 00:30:10.248 [2024-04-18 12:06:00.705248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.705526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.248 [2024-04-18 12:06:00.705542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.705819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.706124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.706140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.706419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.706765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.706781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.706994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.707361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.707377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.707730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.707994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.708012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 
00:30:10.249 [2024-04-18 12:06:00.708299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.708625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.708642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.708966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.709284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.709300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.709580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.709871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.709887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.710164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.710462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.710478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.710752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.711075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.711091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.711444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.711820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.711837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.712078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.712401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.712417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 
00:30:10.249 [2024-04-18 12:06:00.712703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.713050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.713066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.713392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.713710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.713726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.714127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.714458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.714476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.714780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.715044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.715060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.715269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.715616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.715633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.715957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.716250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.716266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.716540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.716805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.716822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 
00:30:10.249 [2024-04-18 12:06:00.717150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.717431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.717447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.717729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.718073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.718089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.718380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.718727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.718744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.719014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.719382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.719397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.719703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.719905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.719921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.720270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.720642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.720661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.249 qpair failed and we were unable to recover it. 00:30:10.249 [2024-04-18 12:06:00.720937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.249 [2024-04-18 12:06:00.721152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.721168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 
00:30:10.250 [2024-04-18 12:06:00.721462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.721744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.721759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.722034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.722300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.722315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.722517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.722790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.722806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.723193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.723458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.723474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.723740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.723994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.724010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.724272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.724617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.724633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.724936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.725179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.725195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 
00:30:10.250 [2024-04-18 12:06:00.725539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.725866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.725882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.726089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.726430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.726448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.726815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.727139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.727160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.727431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.727761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.727777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.728130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.728410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.728426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.728720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.729072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.729088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.729322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.729652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.729668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 
00:30:10.250 [2024-04-18 12:06:00.729946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.730279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.730295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.730578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.730853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.730869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.731219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.731540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.731557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.731829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.732103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.732118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.732337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.732683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.732699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.733027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.733302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.733318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.733692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.733890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.733906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 
00:30:10.250 [2024-04-18 12:06:00.734102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.734324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.734340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.734706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.734977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.734993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.735359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.735659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.735675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.736000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.736187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.736203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.736424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.736693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.736709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.736987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.737323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.737338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 00:30:10.250 [2024-04-18 12:06:00.737605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.737823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.737839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.250 qpair failed and we were unable to recover it. 
00:30:10.250 [2024-04-18 12:06:00.738116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.250 [2024-04-18 12:06:00.738377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.738394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.738743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.739001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.739017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.739306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.739649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.739665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.739954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.740225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.740241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.740590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.740845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.740861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.741163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.741418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.741434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.741791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.742139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.742155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 
00:30:10.251 [2024-04-18 12:06:00.742496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.742768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.742784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.743139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.743413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.743429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.743710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.743939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.743955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.744261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.744594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.744610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.744960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.745295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.745314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.745583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.745940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.745955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.746282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.746576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.746592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 
00:30:10.251 [2024-04-18 12:06:00.746819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.747096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.747112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.747390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.747612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.747628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.747967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.748170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.748186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.748333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.748541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.748557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.748931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.749194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.749210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.749525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.749798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.749814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.750108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.750359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.750375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 
00:30:10.251 [2024-04-18 12:06:00.750582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.750885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.750901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.751175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.751517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.751533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.751816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.752140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.752156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.752479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.752808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.752824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.753154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.753405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.753421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.753809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.754034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.754050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.754356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.754703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.754719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 
00:30:10.251 [2024-04-18 12:06:00.754934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.755267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.755282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.251 qpair failed and we were unable to recover it. 00:30:10.251 [2024-04-18 12:06:00.755491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.251 [2024-04-18 12:06:00.755808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.755824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.756092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.756431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.756447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.756754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.757022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.757038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.757390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.757644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.757660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.757954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.758285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.758300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.758643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.758967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.758983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 
00:30:10.252 [2024-04-18 12:06:00.759284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.759508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.759525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.759893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.760192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.760208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.760558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.760722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.760738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.761027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.761281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.761296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.761641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.761909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.761924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.762195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.762456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.762472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.762731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.762999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.763015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 
00:30:10.252 [2024-04-18 12:06:00.763298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.763634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.763650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.763937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.764265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.764282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.764572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.764866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.764881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.765137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.765490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.765506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.765802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.766092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.766108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.766446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.766785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.766801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.767174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.767399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.767415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 
00:30:10.252 [2024-04-18 12:06:00.767767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.768033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.768049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.768314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.768669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.768685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.769048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.769428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.769459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.769764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.770081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.770109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.770404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.770745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.770767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.252 [2024-04-18 12:06:00.771090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.771455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.252 [2024-04-18 12:06:00.771477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.252 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.771781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.772138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.772159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 
00:30:10.521 [2024-04-18 12:06:00.772443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.772767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.772788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.773075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.773424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.773446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.773761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.774114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.774134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.774500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.774790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.774811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.775168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.775388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.775409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.775637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.775993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.776014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.776385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.776661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.776682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 
00:30:10.521 [2024-04-18 12:06:00.776966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.777359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.777380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.777747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.778106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.778126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.778466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.778751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.778772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.779046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.779347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.779368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.779745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.779977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.779998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.780333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.780708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.780730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.781062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.781443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.781469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 
00:30:10.521 [2024-04-18 12:06:00.781808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.782149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.782170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.782551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.782840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.782861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.783183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.783399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.783421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.783772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.784156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.784177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.784509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.784783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.784803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.785087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.785364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.785386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.521 qpair failed and we were unable to recover it. 00:30:10.521 [2024-04-18 12:06:00.785747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.786015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.521 [2024-04-18 12:06:00.786037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 
00:30:10.522 [2024-04-18 12:06:00.786404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.786800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.786822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.787105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.787384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.787405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.787690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.788046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.788067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.788352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.788577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.788598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.788832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.789167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.789187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.789545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.789846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.789867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.790122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.790480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.790502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 
00:30:10.522 [2024-04-18 12:06:00.790793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.791148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.791169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.791514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.793844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.793888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.794016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:30:10.522 [2024-04-18 12:06:00.794440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.794808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.794825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.795155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.795487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.795503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.795890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.796242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.796258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.796589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.796844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.796860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.797077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.797297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.797313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 
00:30:10.522 [2024-04-18 12:06:00.797691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.797897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.797913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.798229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.798529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.798545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.798870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.799109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.799125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.799437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.799804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.799821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.800089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.800376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.800391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.800620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.800847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.800863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.801075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.801412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.801428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 
00:30:10.522 [2024-04-18 12:06:00.801705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.801983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.801999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.802324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.802673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.802689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.802968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.803325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.803341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.803694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.803966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.803982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.804390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.804695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.804711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.804926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.805245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.805261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 00:30:10.522 [2024-04-18 12:06:00.805614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.805890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.522 [2024-04-18 12:06:00.805906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.522 qpair failed and we were unable to recover it. 
00:30:10.522 [2024-04-18 12:06:00.806221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.806584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.806600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.806896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.807172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.807189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.807445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.807724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.807740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.807954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.808182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.808198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.808438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.808663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.808678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.808973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.809292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.809340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.809841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.810365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.810425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 
00:30:10.523 [2024-04-18 12:06:00.810923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.811230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.811255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.811652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.811947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.812001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.812388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.812810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.812861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.813328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.813624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.813640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.813899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.814222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.814238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.814590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.814890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.814905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.815201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.815608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.815656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 
00:30:10.523 [2024-04-18 12:06:00.816007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.816276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.816324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.816781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.817032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.817081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.817449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.817805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.817853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.818217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.818625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.818675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.819074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.819425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.819484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.819927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.820281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.820331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.820790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.821091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.821107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 
00:30:10.523 [2024-04-18 12:06:00.821474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.821814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.821864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.822263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.822583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.822600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.822924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.823261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.823276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.823482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.823837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.823886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.824252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.824553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.824603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.824967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.825312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.825360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 00:30:10.523 [2024-04-18 12:06:00.825672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.825923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.825971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.523 qpair failed and we were unable to recover it. 
00:30:10.523 [2024-04-18 12:06:00.826389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.826699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.523 [2024-04-18 12:06:00.826715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.826989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.827283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.827298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.827633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.828012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.828060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.828474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.828797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.828845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.829185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.829582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.829631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.829991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.830351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.830399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.830832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.831167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.831217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 
00:30:10.524 [2024-04-18 12:06:00.831639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.831995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.832044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.832381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.832722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.832738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.833040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.833495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.833546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.833931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.834365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.834413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.834784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.835197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.835246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.835578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.835937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.835986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.836399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.836702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.836753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 
00:30:10.524 [2024-04-18 12:06:00.837150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.837421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.837456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.837737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.838092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.838108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.838443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.838891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.838940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.839243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.839575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.839591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.839795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.840073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.840089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.840383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.840735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.840752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.841058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.841268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.841283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 
00:30:10.524 [2024-04-18 12:06:00.841559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.841911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.841927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.842203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.842544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.842561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.842886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.843162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.843178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.843503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.843825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.843841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.844184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.844461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.844477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.844817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.845160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.845176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.845563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.845908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.845925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 
00:30:10.524 [2024-04-18 12:06:00.846194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.846515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.846533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.524 qpair failed and we were unable to recover it. 00:30:10.524 [2024-04-18 12:06:00.846744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.524 [2024-04-18 12:06:00.847050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.847066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.847382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.847635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.847651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.847987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.848345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.848361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.848710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.849037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.849053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.849311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.849583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.849599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.849884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.850185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.850201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 
00:30:10.525 [2024-04-18 12:06:00.850548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.850822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.850838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.851160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.851533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.851549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.851896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.852246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.852262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.852631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.852927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.852992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.853259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.853559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.853575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.853857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.854206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.854222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.854567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.854934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.854950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 
00:30:10.525 [2024-04-18 12:06:00.855331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.855653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.855669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.855952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.856218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.856234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.856557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.856887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.856903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.857256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.857633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.857649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.858001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.858302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.858318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.858692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.858967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.858983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.859274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.859528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.859547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 
00:30:10.525 [2024-04-18 12:06:00.859756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.860021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.860037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.860380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.860751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.860767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.861035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.861223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.861238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.525 qpair failed and we were unable to recover it. 00:30:10.525 [2024-04-18 12:06:00.861581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.525 [2024-04-18 12:06:00.861925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.861941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.862223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.862543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.862559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.862850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.863171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.863187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.863526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.863799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.863815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 
00:30:10.526 [2024-04-18 12:06:00.864105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.864445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.864474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.864779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.865124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.865139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.865462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.865784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.865802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.866149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.866419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.866436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.866817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.867162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.867178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.867500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.867779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.867795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.868072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.868424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.868440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 
00:30:10.526 [2024-04-18 12:06:00.868770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.869115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.869131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.869383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.869755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.869771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.870120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.870398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.870414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.870681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.870933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.870949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.871273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.871617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.871634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.871916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.872211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.872227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.872592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.872808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.872824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 
00:30:10.526 [2024-04-18 12:06:00.873172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.873448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.873470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.873743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.874089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.874105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.874475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.874823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.874839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.875124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.875403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.875419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.875771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.876043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.876059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.876404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.876771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.876787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.877108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.877428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.877444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 
00:30:10.526 [2024-04-18 12:06:00.877771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.878113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.878130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.878476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.878843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.878859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.879142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.879496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.879512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.879875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.880243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.526 [2024-04-18 12:06:00.880259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.526 qpair failed and we were unable to recover it. 00:30:10.526 [2024-04-18 12:06:00.880612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.880981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.880996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.881252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.881600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.881616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.881894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.882153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.882169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 
00:30:10.527 [2024-04-18 12:06:00.882538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.882803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.882818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.883194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.883531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.883547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.883894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.884158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.884175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.884463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.884807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.884823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.885147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.885466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.885498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.885849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.886050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.886066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.886393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.886736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.886753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 
00:30:10.527 [2024-04-18 12:06:00.887034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.887283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.887299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.887674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.887956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.888004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.888334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.888603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.888619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.888956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.889338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.889385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.889837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.890249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.890299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.890628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.890907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.890956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.891391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.891747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.891799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 
00:30:10.527 [2024-04-18 12:06:00.892175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.892482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.892499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.892827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.893090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.893106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.893406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.893754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.893804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.894225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.894624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.894674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.895094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.895490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.895506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.895779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.896098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.896113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.896483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.896891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.896939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 
00:30:10.527 [2024-04-18 12:06:00.897360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.897709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.897725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.898104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.898510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.898559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.898980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.899389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.899429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.899717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.900000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.900016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.900318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.900638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.527 [2024-04-18 12:06:00.900689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.527 qpair failed and we were unable to recover it. 00:30:10.527 [2024-04-18 12:06:00.901107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.901512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.901561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.901978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.902383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.902431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 
00:30:10.528 [2024-04-18 12:06:00.902844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.903250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.903300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.903642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.904023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.904072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.904492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.904881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.904930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.905281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.905676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.905692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.906031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.906437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.906494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.906885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.907233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.907281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.907620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.908028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.908077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 
00:30:10.528 [2024-04-18 12:06:00.908429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.908775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.908824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.909165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.909530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.909579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.910016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.910403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.910471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.910872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.911055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.911071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.911400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.911741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.911756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.912081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.912471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.912487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.912826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.913108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.913124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 
00:30:10.528 [2024-04-18 12:06:00.913473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.913753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.913769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.914115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.914487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.914503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.914852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.915133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.915149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.915363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.915684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.915700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.916027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.916295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.916310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.916586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.916939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.916955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.917279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.917569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.917585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 
00:30:10.528 [2024-04-18 12:06:00.917931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.918210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.918225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.918570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.918914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.918931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.919276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.919572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.919588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.919952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.920304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.920320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.920596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.920817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.920833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.921042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.921335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.528 [2024-04-18 12:06:00.921351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.528 qpair failed and we were unable to recover it. 00:30:10.528 [2024-04-18 12:06:00.921619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.921909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.921925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 
00:30:10.529 [2024-04-18 12:06:00.922271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.922545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.922561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.922854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.923190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.923207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.923477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.923742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.923763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.924118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.924385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.924402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.924660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.925027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.925042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.925302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.925651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.925667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.926011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.926353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.926369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 
00:30:10.529 [2024-04-18 12:06:00.926714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.927035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.927051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.927304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.927567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.927583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.927953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.928223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.928239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.928608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.928896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.928912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.929256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.929528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.929544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.929890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.930113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.930129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.930476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.930703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.930719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 
00:30:10.529 [2024-04-18 12:06:00.931045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.931374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.931390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.931644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.931982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.931998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.932340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.932660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.932676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.932969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.933233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.933249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.933597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.933972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.933987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.934311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.934635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.934652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.934998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.935365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.935381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 
00:30:10.529 [2024-04-18 12:06:00.935704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.936028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.936044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.936367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.936691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.936707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.936910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.937238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.937254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.937599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.937970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.937986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.938337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.938656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.938672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.938942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.939301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.939317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 00:30:10.529 [2024-04-18 12:06:00.939683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.939961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.529 [2024-04-18 12:06:00.939977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.529 qpair failed and we were unable to recover it. 
00:30:10.529 [2024-04-18 12:06:00.940319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.940585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.940601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.940964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.941286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.941301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.941649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.941904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.941920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.942271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.942640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.942665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.943015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.943388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.943404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.943655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.943992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.944008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.944308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.944650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.944666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 
00:30:10.530 [2024-04-18 12:06:00.944923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.945265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.945281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.945652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.945974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.945991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.946335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.946594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.946611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.946869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.947197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.947213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.947539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.947760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.947781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.948036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.948394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.948411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.948774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.949046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.949062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 
00:30:10.530 [2024-04-18 12:06:00.949404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.949726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.949742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.950088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.950456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.950472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.950831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.951111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.951127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.951472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.951842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.951858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.952134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.952404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.952420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.952749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.953094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.953110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.953434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.953784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.953800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 
00:30:10.530 [2024-04-18 12:06:00.954067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.954340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.954358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.954571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.954917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.954933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.955300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.955552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.955569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.530 qpair failed and we were unable to recover it. 00:30:10.530 [2024-04-18 12:06:00.955916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.530 [2024-04-18 12:06:00.956284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.956300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.956651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.956918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.956934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.957260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.957531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.957547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.957907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.958181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.958197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 
00:30:10.531 [2024-04-18 12:06:00.958482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.958822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.958837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.959211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.959559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.959576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.959947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.960300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.960316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.960596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.960916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.960934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.961198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.961492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.961509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.961782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.962123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.962139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.962398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.962742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.962758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 
00:30:10.531 [2024-04-18 12:06:00.963049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.963298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.963314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.963658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.963988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.964004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.964352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.964637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.964653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.964997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.965268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.965285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.965631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.965886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.965902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.966253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.966506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.966523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.966869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.967198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.967216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 
00:30:10.531 [2024-04-18 12:06:00.967538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.967880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.967896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.968278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.968633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.968682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.969078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.969486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.969535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.969920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.970210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.970226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.970593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.970961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.971010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.971434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.971878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.971928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.972196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.972598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.972625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 
00:30:10.531 [2024-04-18 12:06:00.972977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.973290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.973337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.973756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.974086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.974134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.974473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.974834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.974882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.975288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.975693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.531 [2024-04-18 12:06:00.975742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.531 qpair failed and we were unable to recover it. 00:30:10.531 [2024-04-18 12:06:00.976130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.976457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.976473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.976826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.977162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.977210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.977604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.977935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.977984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 
00:30:10.532 [2024-04-18 12:06:00.978323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.978681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.978730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.979166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.979572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.979622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.980040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.980444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.980527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.980836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.981223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.981273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.981627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.981969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.982018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.982386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.982804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.982853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.983305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.983691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.983741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 
00:30:10.532 [2024-04-18 12:06:00.984161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.984568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.984617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.985068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.985427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.985488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.985853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.986279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.986328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.986742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.987150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.987199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.987621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.988029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.988077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.988496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.988901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.988951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.989369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.989774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.989790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 
00:30:10.532 [2024-04-18 12:06:00.990120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.990529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.990578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.990978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.991362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.991410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.991784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.992205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.992220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.992515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.992795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.992842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.993239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.993498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.993548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.993962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.994321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.994370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.994752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.995070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.995118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 
00:30:10.532 [2024-04-18 12:06:00.995515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.995924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.995973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.996395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.996846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.996897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.997168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.997578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.997628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.998032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.998305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.998319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.532 [2024-04-18 12:06:00.998511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.998801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.532 [2024-04-18 12:06:00.998817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.532 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:00.999165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:00.999461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:00.999493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:00.999847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.000194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.000209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 
00:30:10.533 [2024-04-18 12:06:01.000532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.000883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.000899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.001269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.001615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.001631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.001953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.002299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.002315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.002690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.003039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.003055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.003426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.003784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.003800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.004175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.004428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.004444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.004704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.004966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.004982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 
00:30:10.533 [2024-04-18 12:06:01.005312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.005584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.005600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.005955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.006322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.006338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.006664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.007010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.007025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.007325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.007645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.007662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.007985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.008329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.008345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.008602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.008923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.008939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.009219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.009570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.009586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 
00:30:10.533 [2024-04-18 12:06:01.009909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.010198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.010214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.010577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.010865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.010881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.011151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.011417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.011433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.011740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.012107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.012124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.012398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.012741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.012763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.013133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.013419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.013435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.013698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.014061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.014077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 
00:30:10.533 [2024-04-18 12:06:01.014424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.014684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.014700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.015044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.015262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.015278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.015639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.015902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.015918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.016244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.016629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.016646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.016916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.017286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.017302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.533 [2024-04-18 12:06:01.017645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.017991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.533 [2024-04-18 12:06:01.018006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.533 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.018380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.018726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.018742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 
00:30:10.534 [2024-04-18 12:06:01.019112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.019363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.019379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.019703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.020029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.020045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.020369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.020758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.020774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.021112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.021431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.021447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.021806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.022173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.022189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.022419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.022747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.022763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.023114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.023381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.023397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 
00:30:10.534 [2024-04-18 12:06:01.023743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.024113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.024129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.024420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.024767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.024783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.025106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.025384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.025400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.025751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.026074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.026090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.026358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.026700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.026717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.027007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.027343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.027359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.027738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.028091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.028107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 
00:30:10.534 [2024-04-18 12:06:01.028363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.028650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.028666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.029007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.029381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.029397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.029693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.030016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.030032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.030308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.030599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.030615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.030945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.031211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.031227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.031587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.031841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.031857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.032212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.032425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.032441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 
00:30:10.534 [2024-04-18 12:06:01.032705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.032953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.032969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.033234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.033569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.033585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.534 qpair failed and we were unable to recover it. 00:30:10.534 [2024-04-18 12:06:01.033877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.534 [2024-04-18 12:06:01.034132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.034149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.034459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.034713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.034729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.035076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.035426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.035442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.035816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.036162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.036177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.036501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.036823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.036840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 
00:30:10.535 [2024-04-18 12:06:01.037112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.037473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.037489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.037786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.038128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.038144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.038517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.038866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.038883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.039259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.039463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.039479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.039831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.040087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.040103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.040364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.040740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.040757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.041111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.041380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.041396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 
00:30:10.535 [2024-04-18 12:06:01.041740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.042082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.042098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.042421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.042674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.042690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.043036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.043318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.043334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.043681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.044050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.044066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.044416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.044783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.044799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.045100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.045454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.045470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.045839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.046185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.046202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 
00:30:10.535 [2024-04-18 12:06:01.046579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.046933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.046949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.047295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.047666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.047682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.048007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.048354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.048370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.048742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.048948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.048964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.049261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.049592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.049609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.049932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.050230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.050247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.050517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.050841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.050858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 
00:30:10.535 [2024-04-18 12:06:01.051223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.051496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.051512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.051790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.052119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 [2024-04-18 12:06:01.052169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.535 qpair failed and we were unable to recover it. 00:30:10.535 [2024-04-18 12:06:01.052589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2656381 Killed "${NVMF_APP[@]}" "$@" 00:30:10.535 [2024-04-18 12:06:01.052918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.052934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.053143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.053487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.053504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 12:06:01 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:30:10.536 [2024-04-18 12:06:01.053770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 12:06:01 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:10.536 [2024-04-18 12:06:01.054092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.054109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 12:06:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:10.536 [2024-04-18 12:06:01.054460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 12:06:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:10.536 [2024-04-18 12:06:01.054789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.054807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 
00:30:10.536 12:06:01 -- common/autotest_common.sh@10 -- # set +x 00:30:10.536 [2024-04-18 12:06:01.055060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.055380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.055397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.055722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.056064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.056080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.056384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.056663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.056679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.057025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.057302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.057318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.057602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.057951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.057967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.058243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.058590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.058607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.058978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.059201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.059217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 
00:30:10.536 [2024-04-18 12:06:01.059476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.059806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.059822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.536 [2024-04-18 12:06:01.060194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.060538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.536 [2024-04-18 12:06:01.060554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.536 qpair failed and we were unable to recover it. 00:30:10.804 [2024-04-18 12:06:01.060848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.061212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.061228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.804 qpair failed and we were unable to recover it. 00:30:10.804 [2024-04-18 12:06:01.061480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.061765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.061781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.804 qpair failed and we were unable to recover it. 00:30:10.804 12:06:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:10.804 [2024-04-18 12:06:01.062060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 12:06:01 -- nvmf/common.sh@470 -- # nvmfpid=2657276 00:30:10.804 [2024-04-18 12:06:01.062382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.062398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.804 qpair failed and we were unable to recover it. 00:30:10.804 12:06:01 -- nvmf/common.sh@471 -- # waitforlisten 2657276 00:30:10.804 [2024-04-18 12:06:01.062728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 12:06:01 -- common/autotest_common.sh@817 -- # '[' -z 2657276 ']' 00:30:10.804 [2024-04-18 12:06:01.063000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.063017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.804 qpair failed and we were unable to recover it. 
00:30:10.804 12:06:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.804 [2024-04-18 12:06:01.063393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 12:06:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:10.804 [2024-04-18 12:06:01.063742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.804 [2024-04-18 12:06:01.063761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.804 qpair failed and we were unable to recover it. 00:30:10.805 12:06:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.805 [2024-04-18 12:06:01.064105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 12:06:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:10.805 [2024-04-18 12:06:01.064361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.064378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 12:06:01 -- common/autotest_common.sh@10 -- # set +x 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.064730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.065016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.065035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.065329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.065667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.065683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.066049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.066290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.066305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.066608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.066950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.066966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 
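The shell trace interleaved with the connection errors above shows the test restarting the target after the previous nvmf_tgt process was killed: disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0, which launches nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk network namespace (PID 2657276) and then blocks in waitforlisten until the new process is accepting connections on /var/tmp/spdk.sock. The sketch below is only an illustration of that wait, assuming a plain connect-and-retry loop on the UNIX socket; it is not the actual waitforlisten helper from the autotest scripts.

/* Hedged sketch of a "wait for listen" loop on a UNIX domain socket.
 * The real waitforlisten lives in the autotest shell scripts; this only
 * illustrates the idea of polling until the RPC socket accepts. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Returns 0 once something accepts connections on `path`, -1 on timeout. */
static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* RPC socket is up */
        }
        close(fd);
        usleep(100 * 1000);      /* not listening yet, retry shortly */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("target is listening\n");
    else
        printf("timed out waiting for /var/tmp/spdk.sock\n");
    return 0;
}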
00:30:10.805 [2024-04-18 12:06:01.067314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.067636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.067651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.068002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.068349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.068364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.068735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.069080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.069095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.069422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.069679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.069694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.070019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.070362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.070377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.070723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.071052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.071066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.071338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.071605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.071620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 
00:30:10.805 [2024-04-18 12:06:01.071894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.072262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.072277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.072663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.073010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.073025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.073395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.073684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.073701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.074053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.074380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.074395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.074671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.074949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.074966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.075289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.075642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.075658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.075985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.076330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.076349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 
00:30:10.805 [2024-04-18 12:06:01.076643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.076918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.076934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.077218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.077484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.077500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.077763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.077979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.077994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.078318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.078599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.078616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.078818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.079108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.079124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.079472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.079764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.079780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.080126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.080414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.080430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 
00:30:10.805 [2024-04-18 12:06:01.080787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.081121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.081137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-04-18 12:06:01.081460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.081785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.805 [2024-04-18 12:06:01.081801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.082149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.082446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.082470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.082828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.083197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.083214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.083540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.083887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.083904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.084273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.084546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.084563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.084889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.085172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.085187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-04-18 12:06:01.085510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.085858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.085875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.086254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.086606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.086623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.086948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.087287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.087304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.087599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.087921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.087938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.088329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.088690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.088706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.088980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.089347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.089365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.089626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.089883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.089899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-04-18 12:06:01.090222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.090569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.090585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.090788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.091088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.091105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.091317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.091596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.091612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.091904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.092229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.092246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.092460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.092788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.092805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.093064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.093347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.093364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.093567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.093764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.093780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-04-18 12:06:01.094054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.094332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.094348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.094563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.094906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.094926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.095193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.095536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.095558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.095835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.096096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.096112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.096440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.096792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.096807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.096927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.097183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.097198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.097510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.097773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.097790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-04-18 12:06:01.098069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.098324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.098340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.098640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.098897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.098913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-04-18 12:06:01.099179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.806 [2024-04-18 12:06:01.099400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.099416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.099672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.099971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.099988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.100270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.100392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.100409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.100736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.101078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.101094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.101367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.101653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.101669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 
00:30:10.807 [2024-04-18 12:06:01.101952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.102272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.102289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.102641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.102907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.102923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.103225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.103432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.103448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.103658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.103927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.103943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.104199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.104467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.104484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.104807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.105022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.105039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.105254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.105615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.105631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 
00:30:10.807 [2024-04-18 12:06:01.105913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.106185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.106201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.106530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.106741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.106757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.107021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.107281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.107297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.107599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.107932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.107948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.108247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.108515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.108531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.108733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.108982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.108998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.109276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.109598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.109614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 
00:30:10.807 [2024-04-18 12:06:01.109939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.110212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.110228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.110428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.110694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.110711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.111039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.111388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.111404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.111745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.112028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.112044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.112382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.112652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.112669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.112890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.113098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.113115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.113374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.113661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.113677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 
00:30:10.807 [2024-04-18 12:06:01.113805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.114053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.114069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.114325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.114593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.114609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.114781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.115059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.807 [2024-04-18 12:06:01.115076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.807 qpair failed and we were unable to recover it. 00:30:10.807 [2024-04-18 12:06:01.115278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.115494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.115510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.115841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.116115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.116132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.116419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.116519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.116535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.116812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.117079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.117096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 
00:30:10.808 [2024-04-18 12:06:01.117444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.117617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.117633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.117843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.118182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.118198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.118478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.118662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.118678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.118881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.119235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.119251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.119513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.119703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.119719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.119988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.120207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.120223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.120436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.120653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.120669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 
00:30:10.808 [2024-04-18 12:06:01.120950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.121162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.121178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.121526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.121891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.121907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.122169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.122490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.122506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.122785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.123076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.123093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.123369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.123619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.123636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.123768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.124105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.124121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.124381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.124579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.124594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 
00:30:10.808 [2024-04-18 12:06:01.124803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.125055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.125071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.125417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.125627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.125642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.125916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.126192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.126209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.126475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.126742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.126758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.127106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.127462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.127478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.127768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.128022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.128038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.128258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.128637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.128654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 
00:30:10.808 [2024-04-18 12:06:01.128899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.129183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.129199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.129459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.129781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.129798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.130126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.130331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.130347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.130615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.130881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.808 [2024-04-18 12:06:01.130897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.808 qpair failed and we were unable to recover it. 00:30:10.808 [2024-04-18 12:06:01.131014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.131272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.131288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.131521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.131736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.131752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.132042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.132336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.132353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.132476] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:30:10.809 [2024-04-18 12:06:01.132561] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.809 [2024-04-18 12:06:01.132594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.132942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.132957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.133281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.133573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.133589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.133866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.134134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.134149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.134405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.134694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.134710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.135078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.135400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.135417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.135774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.136045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.136060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.136338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.136604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.136620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 
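The EAL parameter record above passes "-c 0xF0" to the nvmf application. That is a DPDK-style hexadecimal coremask in which bit N selects CPU core N, so 0xF0 corresponds to cores 4-7. A minimal standalone C sketch that decodes such a mask (illustrative only, not SPDK or DPDK code):

/*
 * Decode a DPDK-style coremask like the "-c 0xF0" seen in the EAL
 * parameters above: bit N set means core N is used.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long coremask = strtoul("0xF0", NULL, 16); /* value from "-c 0xF0" */

    printf("coremask 0x%lx selects cores:", coremask);
    for (unsigned bit = 0; bit < sizeof(coremask) * 8; bit++) {
        if (coremask & (1UL << bit))
            printf(" %u", bit);
    }
    printf("\n"); /* prints: coremask 0xf0 selects cores: 4 5 6 7 */
    return 0;
}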
00:30:10.809 [2024-04-18 12:06:01.136884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.137138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.137154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.137504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.137701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.137717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.137921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.138201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.138216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.138541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.138791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.138807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.139074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.139398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.139415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.139700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.139988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.140004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.140349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.140644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.140661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 
00:30:10.809 [2024-04-18 12:06:01.140968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.141160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.141177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.141522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.141793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.141809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.141999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.142270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.142286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.142557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.142900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.142916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.143201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.143457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.143473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.809 [2024-04-18 12:06:01.143685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.143982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.809 [2024-04-18 12:06:01.143998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.809 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.144262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.144533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.144548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 
00:30:10.810 [2024-04-18 12:06:01.144894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.145199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.145214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.145490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.145755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.145770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.146141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.146345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.146362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.146732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.147016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.147033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.147244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.147604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.147620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.147835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.148124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.148139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.148413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.148674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.148691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 
00:30:10.810 [2024-04-18 12:06:01.148989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.149270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.149286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.149610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.149880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.149896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.150178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.150384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.150400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.150601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.150972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.150989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.151252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.151547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.151564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.151928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.152280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.152296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.152576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.152945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.152962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 
00:30:10.810 [2024-04-18 12:06:01.153158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.153423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.153439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.153660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.153843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.153858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.154064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.154383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.154399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.154678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.154974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.154990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.155265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.155545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.155561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.155829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.156081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.156097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.156387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.156601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.156620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 
00:30:10.810 [2024-04-18 12:06:01.156968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.157221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.157238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.157524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.157708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.157724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.157999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.158206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.158223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.158495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.158753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.158768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.159113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.159365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.159385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.159522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.159843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.159859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.810 qpair failed and we were unable to recover it. 00:30:10.810 [2024-04-18 12:06:01.160141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.160361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.810 [2024-04-18 12:06:01.160377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 
00:30:10.811 [2024-04-18 12:06:01.160652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.160906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.160922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 00:30:10.811 [2024-04-18 12:06:01.161271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.161616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.161632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 00:30:10.811 [2024-04-18 12:06:01.161856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.162110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.162128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 00:30:10.811 [2024-04-18 12:06:01.162418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.162705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.162722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 00:30:10.811 [2024-04-18 12:06:01.162937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.163160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.163176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 00:30:10.811 [2024-04-18 12:06:01.163459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.163639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.163655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 00:30:10.811 [2024-04-18 12:06:01.163934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.164211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.811 [2024-04-18 12:06:01.164227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.811 qpair failed and we were unable to recover it. 
00:30:10.811 [2024-04-18 12:06:01.164493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.811 [2024-04-18 12:06:01.164842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.811 [2024-04-18 12:06:01.164858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:10.811 qpair failed and we were unable to recover it.
00:30:10.811 [the same three-message sequence, posix_sock_create connect() failures (errno = 111), an nvme_tcp_qpair_connect_sock error for tqpair=0x614000020040 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it.", repeats continuously from 12:06:01.165 through 12:06:01.207]
00:30:10.814 EAL: No free 2048 kB hugepages reported on node 1
00:30:10.814 [the same qpair connect/recover failure sequence continues from 12:06:01.207 onward]
00:30:10.817 [2024-04-18 12:06:01.251972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.817 [2024-04-18 12:06:01.252192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.817 [2024-04-18 12:06:01.252208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:10.817 qpair failed and we were unable to recover it.
00:30:10.817 [2024-04-18 12:06:01.252480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.252663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.252679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.252891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.253219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.253235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.253584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.253846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.253864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.254188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.254468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.254484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.254853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.255150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.255166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.255425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.255703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.255720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.256060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.256328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.256347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 
00:30:10.817 [2024-04-18 12:06:01.256706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.257056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.257072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.257268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.257613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.257628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.257964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.258245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.258260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.258534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.258817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.258834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.259090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.259359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.259374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.259739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.260014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.260033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.260308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.260649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.260665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 
00:30:10.817 [2024-04-18 12:06:01.260939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.261129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.261145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.261497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.261842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.261858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.262231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.262487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.262503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.262777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.263054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.263070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.263442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.263772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.263788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.264112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.264432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.264447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.264748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.265030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.265046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 
00:30:10.817 [2024-04-18 12:06:01.265312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.265661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.265677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.265904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.266249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.266267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.266526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.266809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.266824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.817 qpair failed and we were unable to recover it. 00:30:10.817 [2024-04-18 12:06:01.267171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.817 [2024-04-18 12:06:01.267538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.267554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.267905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.268178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.268194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.268537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.268924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.268940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.269191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.269530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.269546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 
00:30:10.818 [2024-04-18 12:06:01.269908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.270209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.270225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.270522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.270734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.270750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.271028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.271285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.271301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.271647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.271929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.271945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.272295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.272674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.272692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.272946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.273208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.273224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.273485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.273828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.273843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 
00:30:10.818 [2024-04-18 12:06:01.274127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.274426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.274442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.274723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.274980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.274996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.275371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.275572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.275589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.275915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.276198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.276214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.276509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.276850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.276865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.277188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.277476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.277492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.277842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.278188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.278204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 
00:30:10.818 [2024-04-18 12:06:01.278530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.278851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.278867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.279179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.279443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.279465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.279792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.280072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.280088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.280412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.280768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.280784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.818 qpair failed and we were unable to recover it. 00:30:10.818 [2024-04-18 12:06:01.281089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.281399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.818 [2024-04-18 12:06:01.281416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.281711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.281905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.281921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.282246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.282569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.282584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 
00:30:10.819 [2024-04-18 12:06:01.282869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.283138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.283153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.283499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.283794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.283810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.284109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.284471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.284487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.284836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.285203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.285219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.285480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.285824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.285840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.286113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.286434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.286463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.286499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.819 [2024-04-18 12:06:01.286791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.287058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.287074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 
00:30:10.819 [2024-04-18 12:06:01.287367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.287649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.287665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.287962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.288244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.288260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.288493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.288790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.288806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.289079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.289334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.289350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.289698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.290046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.290061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.290319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.290615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.290631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.290860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.291188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.291207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 
00:30:10.819 [2024-04-18 12:06:01.291576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.291790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.291807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.292058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.292340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.292357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.292703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.293048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.293065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.293441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.293715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.293732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.293925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.294198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.294215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.294590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.294843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.294860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.295077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.295403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.295420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 
00:30:10.819 [2024-04-18 12:06:01.295812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.296084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.296100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.296383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.296693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.296710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.297057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.297354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.297372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.819 qpair failed and we were unable to recover it. 00:30:10.819 [2024-04-18 12:06:01.297660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.297993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.819 [2024-04-18 12:06:01.298009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.298215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.298537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.298553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.298770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.299114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.299130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.299482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.299706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.299722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 
00:30:10.820 [2024-04-18 12:06:01.299919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.300272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.300288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.300561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.300853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.300869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.301210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.301598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.301614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.301897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.302251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.302267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.302618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.302828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.302843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.303106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.303391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.303407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.303727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.304118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.304135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 
00:30:10.820 [2024-04-18 12:06:01.304387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.304730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.304746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.305029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.305301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.305317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.305691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.305942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.305958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.306258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.306512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.306528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.306878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.307199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.307215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.307515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.307856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.307872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.308245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.308568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.308585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 
00:30:10.820 [2024-04-18 12:06:01.308852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.309201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.309220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.309544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.309812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.309828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.310190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.310478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.310495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.310820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.311094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.311110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.311459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.311829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.311845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.312192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.312487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.312503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 00:30:10.820 [2024-04-18 12:06:01.312764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.313029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.820 [2024-04-18 12:06:01.313045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:10.820 qpair failed and we were unable to recover it. 
00:30:10.820 [2024-04-18 12:06:01.313316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.820 [2024-04-18 12:06:01.313661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.820 [2024-04-18 12:06:01.313678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:10.820 qpair failed and we were unable to recover it.
00:30:11.094 [2024-04-18 12:06:01.409581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.094 [2024-04-18 12:06:01.409772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.094 [2024-04-18 12:06:01.409789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:11.094 qpair failed and we were unable to recover it.
00:30:11.094 [2024-04-18 12:06:01.410090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.410437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.410459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.410642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.410941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.410959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.411366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.411580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.411597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.411830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.412113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.412129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.412460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.412730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.412745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.412944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.413285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.413302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.413641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.413939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.413955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 
00:30:11.094 [2024-04-18 12:06:01.414219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.414432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.414448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.414721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.414974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.414990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.415337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.415598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.415619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.415890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.416233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.416249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.416518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.416860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.416876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.417251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.417527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.417544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.417894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.418163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.418179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 
00:30:11.094 [2024-04-18 12:06:01.418512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.418766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.418782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.419150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.419472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.419488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.419742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.420009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.420026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.420401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.420680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.420697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.421002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.421207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.421223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.094 qpair failed and we were unable to recover it. 00:30:11.094 [2024-04-18 12:06:01.421480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.094 [2024-04-18 12:06:01.421825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.421841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.422163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.422417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.422433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 
00:30:11.095 [2024-04-18 12:06:01.422784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.423088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.423105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.423464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.423814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.423830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.424103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.424312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.424328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.424681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.424905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.424921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.425250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.425576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.425593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.425914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.426197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.426213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.426586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.426791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.426807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 
00:30:11.095 [2024-04-18 12:06:01.427132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.427448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.427469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.427738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.428046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.428062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.428421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.428675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.428691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.428960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.429280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.429296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.429623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.429877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.429893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.430240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.430498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.430514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.430797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.431015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.431031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 
00:30:11.095 [2024-04-18 12:06:01.431252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.431362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.431377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.431633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.431929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.431944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.432209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.432382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.432398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.432750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.432960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.432976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.433232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.433426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.433442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.433715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.434062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.434077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.434367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.434708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.434724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 
00:30:11.095 [2024-04-18 12:06:01.435051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.435395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.095 [2024-04-18 12:06:01.435411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.095 qpair failed and we were unable to recover it. 00:30:11.095 [2024-04-18 12:06:01.435756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.436048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.436063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.436322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.436642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.436658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.436920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.437132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.437148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.437420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.437724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.437741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.438010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.438313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.438329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.438656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.439009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.439025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 
00:30:11.096 [2024-04-18 12:06:01.439399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.439725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.439741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.440084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.440357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.440373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.440765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.441084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.441101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.441462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.441754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.441770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.442050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.442412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.442428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.442767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.443090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.443106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.443424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.443789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.443805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 
00:30:11.096 [2024-04-18 12:06:01.444150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.444517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.444533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.444834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.445178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.445194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.445515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.445794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.445810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.446139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.446497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.446513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.446746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.446972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.446988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.447283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.447476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.447492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.447720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.448063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.448079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 
00:30:11.096 [2024-04-18 12:06:01.448375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.448677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.448694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.448989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.449380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.449396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.449740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.449945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.449961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.450287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.450543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.450560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.450864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.451145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.451161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.451497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.451773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.451789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.452086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.452357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.452372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 
00:30:11.096 [2024-04-18 12:06:01.452678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.453019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.453035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.453290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.453648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.096 [2024-04-18 12:06:01.453664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.096 qpair failed and we were unable to recover it. 00:30:11.096 [2024-04-18 12:06:01.454022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.454393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.454409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.454733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.454995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.455011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.455333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.455706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.455722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.455996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.456271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.456286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.456640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.456895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.456912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 
00:30:11.097 [2024-04-18 12:06:01.457266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.457531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.457548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.457829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.458158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.458174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.458523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.458794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.458810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.459184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.459531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.459547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.459770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.460045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.460061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.460354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.460687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.460704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.461003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.461277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.461293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 
00:30:11.097 [2024-04-18 12:06:01.461659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.462010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.462026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.462366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.462699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.462727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.463054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.463397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.463414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.463795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.464069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.464085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.464461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.464737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.464753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.465078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.465367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.465383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.465609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.465881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.465897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 
00:30:11.097 [2024-04-18 12:06:01.466221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.466548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.466565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.466778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.467125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.467142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.467488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.467742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.467758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.468108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.468481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.468498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.468781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.469109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.469125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.469427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.469662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.469679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.470015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.470377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.470393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 
00:30:11.097 [2024-04-18 12:06:01.470651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.470910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.470926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.471212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.471562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.471578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.097 [2024-04-18 12:06:01.471860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.472073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-04-18 12:06:01.472088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.097 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.472345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.472618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.472635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.472852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.473119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.473137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.473483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.473810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.473826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.474201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.474499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.474515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 
00:30:11.098 [2024-04-18 12:06:01.474771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.475120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.475136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.475490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.475768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.475784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.476039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.476391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.476407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.476724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.476928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.476944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.477217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.477497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.477514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.477841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.478120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.478141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.478329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.478544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.478560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 
00:30:11.098 [2024-04-18 12:06:01.478896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.479216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.479234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.479476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.479734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.479750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.480075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.480279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.480295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.480521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.480714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.480729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.480935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.481334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.481350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.481679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.481971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.481986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.482244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.482504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.482521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 
00:30:11.098 [2024-04-18 12:06:01.482845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.483133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.483148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.483336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.483538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.483554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.483743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.483931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.483947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.484245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.484494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.484513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.484768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.485111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.485127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.485493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.485753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.485769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.486046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.486257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.486273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 
00:30:11.098 [2024-04-18 12:06:01.486526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.486744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.486760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.487018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.487306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.487321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.487641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.487989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.488005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.098 [2024-04-18 12:06:01.488222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.488518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-04-18 12:06:01.488534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.098 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.488859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.489250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.489266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.489482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.489735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.489751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.490072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.490336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.490354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 
00:30:11.099 [2024-04-18 12:06:01.490638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.490838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.490853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.491198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.491409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.491424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.491730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.491943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.491959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.492238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.492458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.492474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.492800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.493132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.493148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.493325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.493611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.493627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.493907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.494185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.494201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 
00:30:11.099 [2024-04-18 12:06:01.494475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.494685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.494701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.494960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.495300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.495326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.495676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.496014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.496029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.496381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.496708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.496724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.497054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.497325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.497341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.497613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.497722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.497738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.498006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.498278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.498295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 
00:30:11.099 [2024-04-18 12:06:01.498646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.498909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.498926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.499274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.499478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.499494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.499761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.499968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.499984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.500240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.500512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.500528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.500785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.501061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.501077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.501405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.501726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.501743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 00:30:11.099 [2024-04-18 12:06:01.502039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.502390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.099 [2024-04-18 12:06:01.502407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.099 qpair failed and we were unable to recover it. 
00:30:11.099 [2024-04-18 12:06:01.502624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.502904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.502920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.503030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.503228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.503244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.503570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.503892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.503908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.504201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.504414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.504430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.504768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.505020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.505037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.505225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.505421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.505437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.505672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.505921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.505937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 
00:30:11.100 [2024-04-18 12:06:01.506237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.506461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.506477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.506723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.506912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.506927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.507183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.507373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.507389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.507657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.507928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.507944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.508197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.508477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.508493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.508763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.508981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.508998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.509327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.509609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.509625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 
00:30:11.100 [2024-04-18 12:06:01.509971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.510273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.510289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.510622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.510863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.510879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.511273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.511625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.511641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.511850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.512222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.512238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.512607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.512867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.512883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.513129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.513423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.513439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.513781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.514039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.514055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 
00:30:11.100 [2024-04-18 12:06:01.514403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.514985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.515016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.515396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.515401] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.100 [2024-04-18 12:06:01.515437] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.100 [2024-04-18 12:06:01.515454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.100 [2024-04-18 12:06:01.515468] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.100 [2024-04-18 12:06:01.515478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.100 [2024-04-18 12:06:01.515651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:11.100 [2024-04-18 12:06:01.515718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.515735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.515743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:11.100 [2024-04-18 12:06:01.516039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.516309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.516326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.516546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.516822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.516839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.517108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.517304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.517320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 
00:30:11.100 [2024-04-18 12:06:01.517627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.517903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.100 [2024-04-18 12:06:01.517919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.100 qpair failed and we were unable to recover it. 00:30:11.100 [2024-04-18 12:06:01.518270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.518530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.101 [2024-04-18 12:06:01.518606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.518552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:11.101 [2024-04-18 12:06:01.518622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.518838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.519065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.519081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.522468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.522778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.522803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.523095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.523434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.523470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.523815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.524095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.524113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.524384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.524661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.524686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 
00:30:11.101 [2024-04-18 12:06:01.525026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.525405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.525423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.525708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.525981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.525997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.526363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.526696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.526712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.526945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.527293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.527312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.527642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.527921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.527938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.528205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.528460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.528477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.528803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.528980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.528996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 
00:30:11.101 [2024-04-18 12:06:01.529341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.529689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.529706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.530012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.530280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.530296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.530561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.530837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.530853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.531059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.531409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.531426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.531754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.532022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.532038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.532313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.532605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.532621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.532826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.533123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.533139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 
00:30:11.101 [2024-04-18 12:06:01.533414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.533712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.533729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.534075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.534331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.534347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.534663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.534940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.534956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.535183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.535443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.535464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.535824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.536101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.536118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.536455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.536817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.536834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.537038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.537385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.537401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 
00:30:11.101 [2024-04-18 12:06:01.537735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.538012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.538029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.101 qpair failed and we were unable to recover it. 00:30:11.101 [2024-04-18 12:06:01.538303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.538594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.101 [2024-04-18 12:06:01.538610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.538745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.539092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.539108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.539405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.539834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.539851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.540177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.540323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.540339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.540665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.540962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.540978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.541313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.541425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.541441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 
00:30:11.102 [2024-04-18 12:06:01.541719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.541992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.542008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.542263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.542554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.542571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.542899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.543032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.543048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.543316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.543633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.543650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.543855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.544073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.544089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.544414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.544690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.544708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 00:30:11.102 [2024-04-18 12:06:01.545015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.545338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.102 [2024-04-18 12:06:01.545354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.102 qpair failed and we were unable to recover it. 
00:30:11.102 [2024-04-18 12:06:01.545681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.102 [2024-04-18 12:06:01.546027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.102 [2024-04-18 12:06:01.546043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:11.102 qpair failed and we were unable to recover it.
[2024-04-18 12:06:01.546 - 12:06:01.631] The same failure sequence repeats for every remaining connection attempt: two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" records, one "nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420" record (three attempts around 12:06:01.571-01.572 report tqpair=0x614000030040 instead), each followed by "qpair failed and we were unable to recover it."
00:30:11.377 [2024-04-18 12:06:01.631313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.631529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.631545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.631878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.632137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.632154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.632423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.632707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.632723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.633049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.633323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.633338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.633616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.633837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.633852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.634049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.634319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.634335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.634687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.634975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.634991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 
00:30:11.377 [2024-04-18 12:06:01.635182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.635448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.635468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.635684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.635938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.635954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.636231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.636573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.636590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.636847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.637100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.637115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.637373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.637661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.637676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.637912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.638182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.638198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.638486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.638749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.638765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 
00:30:11.377 [2024-04-18 12:06:01.639086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.639431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.639447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.639746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.640037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.640053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.640329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.640527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.640543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.640787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.641136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.641151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.641493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.641767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.641783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.641915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.642216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.642231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.642522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.642788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.642804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 
00:30:11.377 [2024-04-18 12:06:01.643155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.643478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.643494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.643779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.643994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.644010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.644265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.644529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.377 [2024-04-18 12:06:01.644545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.377 qpair failed and we were unable to recover it. 00:30:11.377 [2024-04-18 12:06:01.644891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.645213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.645229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.645552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.645746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.645763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.645974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.646277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.646293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.646490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.646803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.646819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 
00:30:11.378 [2024-04-18 12:06:01.647133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.647456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.647472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.647819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.648075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.648091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.648374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.648709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.648725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.648952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.649231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.649248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.649519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.649715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.649731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.649953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.650279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.650295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.650514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.650783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.650799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 
00:30:11.378 [2024-04-18 12:06:01.651136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.651479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.651495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.651771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.652047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.652063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.652318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.652643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.652660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.652983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.653377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.653393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.653649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.653971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.653987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.654278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.654657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.654674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.654984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.655307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.655323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 
00:30:11.378 [2024-04-18 12:06:01.655537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.655803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.655819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.656166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.656438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.656459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.656759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.656953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.656969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.657245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.657577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.657593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.657804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.658149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.658164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.658425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.658800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.658817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.659075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.659468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.659485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 
00:30:11.378 [2024-04-18 12:06:01.659710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.659995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.660011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.660370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.660646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.660662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.378 [2024-04-18 12:06:01.660890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.661158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.378 [2024-04-18 12:06:01.661174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.378 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.661418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.661674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.661690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.661910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.662106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.662122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.662456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.662746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.662762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.662977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.663185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.663201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 
00:30:11.379 [2024-04-18 12:06:01.663460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.663701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.663717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.663941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.664212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.664228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.664509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.664836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.664851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.665115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.665400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.665416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.665696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.665972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.665988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.666348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.666566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.666582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.666861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.667202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.667218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 
00:30:11.379 [2024-04-18 12:06:01.667510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.667785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.667802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.668011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.668301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.668317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.668641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.668864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.668881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.669112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.669444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.669472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.669678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.669860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.669875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.670149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.670508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.670524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.670853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.671200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.671216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 
00:30:11.379 [2024-04-18 12:06:01.671409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.671670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.671687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.671953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.672222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.672238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.672565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.672836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.672852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.673188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.673509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.673525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.673828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.674047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.674063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.674332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.674602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.674619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.674902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.675236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.675253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 
00:30:11.379 [2024-04-18 12:06:01.675606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.675886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.675902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.676281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.676559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.676575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.676847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.677124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.379 [2024-04-18 12:06:01.677139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.379 qpair failed and we were unable to recover it. 00:30:11.379 [2024-04-18 12:06:01.677477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.677838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.677854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.678143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.678488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.678505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.678743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.679065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.679081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.679371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.679671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.679687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 
00:30:11.380 [2024-04-18 12:06:01.679965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.680297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.680313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.680578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.680807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.680823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.681202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.681478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.681494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.681717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.681985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.682001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.682233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.682493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.682510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.682777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.683125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.683141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.683469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.683800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.683816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 
00:30:11.380 [2024-04-18 12:06:01.684077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.684342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.684358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.684662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.684983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.684999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.685239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.685522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.685538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.685800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.686068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.686085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.686351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.686683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.686699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.686903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.687127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.687143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.687494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.687840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.687856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 
00:30:11.380 [2024-04-18 12:06:01.688223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.688493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.688509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.688715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.689014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.689030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.689396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.689616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.689632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.689842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.690165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.690181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.690510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.690802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.690818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.691113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.691320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.691336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.691734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.691947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.691963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 
00:30:11.380 [2024-04-18 12:06:01.692308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.692684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.692700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.692913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.693215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.693234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.693559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.693828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.693844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.380 qpair failed and we were unable to recover it. 00:30:11.380 [2024-04-18 12:06:01.694054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.380 [2024-04-18 12:06:01.694260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.694277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.694565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.694773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.694788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.695031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.695323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.695339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.695596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.695851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.695866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 
00:30:11.381 [2024-04-18 12:06:01.696193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.696482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.696498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.696746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.696939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.696956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.697231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.697507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.697524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.697812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.698069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.698085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.698300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.698662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.698682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.698967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.699175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.699191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.699459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.699679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.699694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 
00:30:11.381 [2024-04-18 12:06:01.699898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.700227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.700243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.700468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.700759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.700775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.701040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.701299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.701320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.701615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.701957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.701973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.702231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.702430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.702446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.702800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.703121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.703137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.703339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.703638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.703655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 
00:30:11.381 [2024-04-18 12:06:01.703921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.704267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.704285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.704549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.704818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.704833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.705180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.705442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.705462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.381 [2024-04-18 12:06:01.705789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.705992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.381 [2024-04-18 12:06:01.706008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.381 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.706351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.706460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.706477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.706674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.706996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.707012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.707274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.707472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.707487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 
00:30:11.382 [2024-04-18 12:06:01.707745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.708067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.708083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.708271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.708539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.708555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.708838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.709160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.709175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.709443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.709666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.709685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.709976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.710279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.710295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.710557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.710915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.710931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.711130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.711388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.711404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 
00:30:11.382 [2024-04-18 12:06:01.711627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.711733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.711748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.712098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.712419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.712435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.712697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.713019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.713035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.713329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.713651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.713667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.713967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.714264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.714280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.714532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.714854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.714869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.715193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.715409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.715425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 
00:30:11.382 [2024-04-18 12:06:01.715758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.715881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.715897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.716097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.716308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.716325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.716592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.716911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.716927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.717153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.717423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.717439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.717640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.717973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.717988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.718211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.718482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.718500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.718798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.718977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.718993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 
00:30:11.382 [2024-04-18 12:06:01.719180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.719426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.719442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.719670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.719952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.719969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.720174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.720368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.720384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.720556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.720878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.720893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.382 qpair failed and we were unable to recover it. 00:30:11.382 [2024-04-18 12:06:01.721092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.382 [2024-04-18 12:06:01.721374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.721390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.721683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.722006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.722021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.722231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.722431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.722447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 
00:30:11.383 [2024-04-18 12:06:01.722718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.723061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.723076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.723278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.723544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.723561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.723780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.724090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.724106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.724358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.724683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.724699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.724908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.725225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.725241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.725582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.725902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.725918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.726252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.726519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.726535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 
00:30:11.383 [2024-04-18 12:06:01.726804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.727173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.727189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.727547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.727917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.727933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.728209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.728532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.728548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.728895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.729168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.729184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.729528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.729751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.729766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.730103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.730445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.730464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.730728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.731018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.731034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 
00:30:11.383 [2024-04-18 12:06:01.731330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.731694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.731710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.731930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.732251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.732266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.732584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.732866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.732882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.733179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.733439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.733459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.733807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.734008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.734022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.734282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.734610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.734626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.734924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.735253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.735269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 
00:30:11.383 [2024-04-18 12:06:01.735613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.735989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.736006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.736291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.736637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.736653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.736997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.737366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.737382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.737658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.738007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.738023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.738279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.738579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.383 [2024-04-18 12:06:01.738595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.383 qpair failed and we were unable to recover it. 00:30:11.383 [2024-04-18 12:06:01.738906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.739184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.739200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.739488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.739801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.739818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 
00:30:11.384 [2024-04-18 12:06:01.740120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.740495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.740511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.740738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.740999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.741014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.741361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.741698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.741714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.742091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.742354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.742371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.742675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.742934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.742950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.743291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.743567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.743583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.743860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.744189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.744206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 
00:30:11.384 [2024-04-18 12:06:01.744488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.744781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.744797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.745193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.745411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.745428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.745728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.745975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.745991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.746384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.746701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.746717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.746978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.747206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.747221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.747494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.747860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.747876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.748184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.748402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.748417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 
00:30:11.384 [2024-04-18 12:06:01.748730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.749004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.749022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.749383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.749729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.749745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.750022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.750359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.750375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.750578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.750902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.750918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.751159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.751441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.751464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.751826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.752151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.752167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.752467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.752749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.752765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 
00:30:11.384 [2024-04-18 12:06:01.752963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.753267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.753282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.753562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.753819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.753835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.754098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.754373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.754388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.754682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.754950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.754966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.755342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.755550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.755566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.384 qpair failed and we were unable to recover it. 00:30:11.384 [2024-04-18 12:06:01.755780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.384 [2024-04-18 12:06:01.756124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.756140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.756365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.756544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.756560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 
00:30:11.385 [2024-04-18 12:06:01.756968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.757343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.757371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.757679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.757907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.757929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.758162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.758456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.758479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.758788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.759144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.759166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.759444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.759712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.759734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.759989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.760209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.760231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.760591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.760777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.760794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 
00:30:11.385 [2024-04-18 12:06:01.761007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.761364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.761380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.761601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.761880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.761896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.762211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.762576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.762593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.762919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.763130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.763147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.763344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.763675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.763692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.764020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.764344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.764361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.764635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.764892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.764909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 
00:30:11.385 [2024-04-18 12:06:01.765131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.765353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.765371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.765593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.765797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.765814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.766081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.766300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.766317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.766538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.766663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.766680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.767055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.767317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.767334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.767662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.767918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.767935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.768155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.768446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.768472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 
00:30:11.385 [2024-04-18 12:06:01.768706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.768966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.768987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.769313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.769530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.769548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.769834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.770082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.770099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.770388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.770590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.770607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.770933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.771216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.771232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.385 qpair failed and we were unable to recover it. 00:30:11.385 [2024-04-18 12:06:01.771510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.385 [2024-04-18 12:06:01.771784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.771802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.772017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.772291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.772307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 
00:30:11.386 [2024-04-18 12:06:01.772526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.772795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.772811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.773010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.773267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.773285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.773556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.773704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.773721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.773934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.774155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.774173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.774383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.774752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.774771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.775040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.775294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.775311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.775521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.775777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.775796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 
00:30:11.386 [2024-04-18 12:06:01.776051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.776325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.776342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.776668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.776992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.777008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.777275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.777548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.777565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.777894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.778160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.778199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.778504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.778689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.778705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.779001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.779272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.779291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.779566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.779756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.779772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 
00:30:11.386 [2024-04-18 12:06:01.780049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.780242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.780258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.780494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.780838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.780854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.781126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.781448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.781472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.781686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.782029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.782045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.782265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.782536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.782552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.782882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.783080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.783096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.386 [2024-04-18 12:06:01.783363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.783620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.783636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 
00:30:11.386 [2024-04-18 12:06:01.783960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.784150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.386 [2024-04-18 12:06:01.784167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.386 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.784443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.784729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.784748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.785093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.785295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.785311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.785524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.785846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.785862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.786067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.786390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.786407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.786675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.786883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.786899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.787110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.787402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.787418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 
00:30:11.387 [2024-04-18 12:06:01.787546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.787814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.787830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.788104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.788312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.788328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.788663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.788998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.789014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.789329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.789601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.789617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.789906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.790074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.790092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.790306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.790571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.790588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.790869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.791127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.791143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 
00:30:11.387 [2024-04-18 12:06:01.791460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.791716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.791732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.792005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.792316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.792331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.792507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.792710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.792725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.792868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.793159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.793175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.793382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.793714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.793731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.793940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.794187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.794204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.794418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.794671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.794687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 
00:30:11.387 [2024-04-18 12:06:01.795038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.795298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.795316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.795498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.795846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.795861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.796123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.796377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.796393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.796652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.796931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.796947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.797166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.797371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.797387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.797735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.798011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.798027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.798303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.798580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.798596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 
00:30:11.387 [2024-04-18 12:06:01.798852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.799130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.799146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.387 qpair failed and we were unable to recover it. 00:30:11.387 [2024-04-18 12:06:01.799416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.387 [2024-04-18 12:06:01.799521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.799537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.799835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.800016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.800032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.800246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.800512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.800529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.800696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.800968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.800984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.801271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.801485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.801502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.801770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.801886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.801902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 
00:30:11.388 [2024-04-18 12:06:01.802109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.802397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.802414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.802625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.802856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.802872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.803199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.803313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.803330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.803588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.803807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.803824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.804058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.804344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.804359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.804605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.804799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.804815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.804918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.805262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.805278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 
00:30:11.388 [2024-04-18 12:06:01.805611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.805801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.805817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.806089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.806470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.806487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.806753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.807122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.807138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.807334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.807550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.807567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.807839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.808059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.808075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.808401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.808563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.808579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.808849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.809032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.809049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 
00:30:11.388 [2024-04-18 12:06:01.809321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.809529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.809545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.809890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.810208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.810224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.810492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.810697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.810713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.810948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.811214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.811230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.811438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.811653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.811670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.812019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.812227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.812243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.812580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.812862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.812878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 
00:30:11.388 [2024-04-18 12:06:01.813138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.813424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.813440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.813722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.813978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.813994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.388 qpair failed and we were unable to recover it. 00:30:11.388 [2024-04-18 12:06:01.814298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.388 [2024-04-18 12:06:01.814625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.814641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.814866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.815139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.815154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.815447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.815568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.815584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.815885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.816146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.816162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.816437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.816714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.816731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 
00:30:11.389 [2024-04-18 12:06:01.817022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.817169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.817185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.817387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.817685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.817701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.817850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.818222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.818238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.818519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.818722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.818738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.818942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.819150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.819166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.819356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.819577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.819593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.819722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.820063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.820079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 
00:30:11.389 [2024-04-18 12:06:01.820213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.820562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.820578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.820780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.820991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.821007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.821345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.821598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.821615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.821880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.822137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.822153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.822460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.822735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.822751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.823046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.823260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.823276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.823427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.823701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.823717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 
00:30:11.389 [2024-04-18 12:06:01.824014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.824277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.824293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.824498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.824676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.824692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.824957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.825180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.825196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.825471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.825762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.825778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.825994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.826206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.826223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.826445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.826659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.826675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.826946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.827225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.827241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 
00:30:11.389 [2024-04-18 12:06:01.827445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.827658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.827674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.827864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.828072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.828088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.389 qpair failed and we were unable to recover it. 00:30:11.389 [2024-04-18 12:06:01.828302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.828427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.389 [2024-04-18 12:06:01.828443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.828666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.828864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.828879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.829156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.829355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.829370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.829654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.829908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.829924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.830134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.830330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.830345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 
00:30:11.390 [2024-04-18 12:06:01.830670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.830874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.830890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.831169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.831398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.831413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.831658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.831896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.831912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.832114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.832306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.832322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.832610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.832861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.832877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.833074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.833277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.833292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.833572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.833796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.833812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 
00:30:11.390 [2024-04-18 12:06:01.834078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.834284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.834300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.834514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.834724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.834740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.834996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.835178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.835193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.835422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.835615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.835631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.835934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.836163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.836184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.836231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:30:11.390 [2024-04-18 12:06:01.836501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.836672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.836694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.837026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.837246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.837268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 
00:30:11.390 [2024-04-18 12:06:01.837653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.837892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.837913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.838800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.839067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.839093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.839302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.839584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.839607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.839886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.840089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.840111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.840340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.840553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.840575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.390 qpair failed and we were unable to recover it. 00:30:11.390 [2024-04-18 12:06:01.840794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.841049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.390 [2024-04-18 12:06:01.841070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.841361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.841587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.841610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 
00:30:11.391 [2024-04-18 12:06:01.841829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.842041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.842063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.842270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.842481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.842502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.842719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.842912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.842933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.843215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.843336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.843356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.843574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.843820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.843840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.844045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.844307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.844327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.844526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.844723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.844746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 
00:30:11.391 [2024-04-18 12:06:01.844968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.845177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.845198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.845466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.845691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.845713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.845915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.846127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.846148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.846427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.846571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.846593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.846795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.847063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.847084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.847303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.847519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.847541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.847752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.847964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.847986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 
00:30:11.391 [2024-04-18 12:06:01.848219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.848487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.848508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.848655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.848851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.848873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.849153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.849373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.849394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.849677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.849881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.849897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.850100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.850305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.850321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.850535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.850732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.850748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.850875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.851066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.851082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 
00:30:11.391 [2024-04-18 12:06:01.851340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.851547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.851564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.851834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.852016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.852032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.852245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.852480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.852496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.852695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.852882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.852898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.853167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.853387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.853404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.391 qpair failed and we were unable to recover it. 00:30:11.391 [2024-04-18 12:06:01.853601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.391 [2024-04-18 12:06:01.853798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.853814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.854020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.854247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.854264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 
00:30:11.392 [2024-04-18 12:06:01.854601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.854852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.854868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.855077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.855404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.855420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.855712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.855968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.855984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.856184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.856402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.856419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.856687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.856868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.856889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.857232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.857414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.857429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.857629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.857826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.857842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 
00:30:11.392 [2024-04-18 12:06:01.858100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.858372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.858388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.858582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.858768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.858784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.858942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.859146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.859161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.859353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.859673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.859689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.859876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.860134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.860150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.860317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.860565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.860594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.860898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.861178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.861200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 
00:30:11.392 [2024-04-18 12:06:01.861484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.861761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.861783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.861993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.862256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.862278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.862429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.862627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.862649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.862939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.863140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.863161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.863354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.863567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.863583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.863719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.863863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.863879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.864135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.864326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.864342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 
00:30:11.392 [2024-04-18 12:06:01.864773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.864951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.864967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.865172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.865285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.865308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.865516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.865749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.865771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.865981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.866258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.866279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.866503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.866702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.866724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.866950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.867305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.867326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.392 qpair failed and we were unable to recover it. 00:30:11.392 [2024-04-18 12:06:01.867580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.392 [2024-04-18 12:06:01.867871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.867892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 
00:30:11.393 [2024-04-18 12:06:01.868092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.868382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.868403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.868719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.868848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.868869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.869152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.869423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.869444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.869653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.869764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.869780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.869968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.870082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.870098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.870289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.870547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.870564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.870753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.870940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.870956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 
00:30:11.393 [2024-04-18 12:06:01.871157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.871457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.871474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.871756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.871947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.871963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.872134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.872459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.872475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.872691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.873011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.873027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.873213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.873482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.873499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.873844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.874024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.874040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.874363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.874632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.874649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 
00:30:11.393 [2024-04-18 12:06:01.874976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.875199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.875216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.875424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.875584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.875602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.875874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.876014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.876034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.876152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.876458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.876474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.876731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.876945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.876961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.877220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.877472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.877489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.877606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.877924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.877940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 
00:30:11.393 [2024-04-18 12:06:01.878135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.878356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.878373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.878698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.879002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.879018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.879190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.879519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.879535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.879721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.879994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.880011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.880248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.880426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.880442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.880707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.880979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.880995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 00:30:11.393 [2024-04-18 12:06:01.881288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.881479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.393 [2024-04-18 12:06:01.881495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.393 qpair failed and we were unable to recover it. 
00:30:11.394 [2024-04-18 12:06:01.881826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.882126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.882143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.882332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.882588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.882604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.882799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.883051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.883068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.883262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.883461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.883479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.883591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.883960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.883976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.884183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.884312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.884328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.884518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.884864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.884881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 
00:30:11.394 [2024-04-18 12:06:01.885156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.885494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.885511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.885791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.886089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.886105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.886352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.886556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.886572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.886791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.886978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.886993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.887215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.887497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.887513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.887814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.888065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.888081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 00:30:11.394 [2024-04-18 12:06:01.888286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.888545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.394 [2024-04-18 12:06:01.888561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.394 qpair failed and we were unable to recover it. 
00:30:11.394 [2024-04-18 12:06:01.888783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.394 [2024-04-18 12:06:01.888951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.394 [2024-04-18 12:06:01.888967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:11.394 qpair failed and we were unable to recover it.
[... the same four-line failure record (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x614000020040 addr=10.0.0.2 port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back with only the timestamps changing, from 12:06:01.889169 through 12:06:01.936731 ...]
[... identical connect()/qpair failure records continue from 12:06:01.936997 onward, with the test script's shell trace interleaved among them: ...]
00:30:11.669 12:06:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:11.669 12:06:01 -- common/autotest_common.sh@850 -- # return 0
00:30:11.669 12:06:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:30:11.669 12:06:01 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:11.669 12:06:01 -- common/autotest_common.sh@10 -- # set +x
00:30:11.671 [2024-04-18 12:06:01.968160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-04-18 12:06:01.968382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-04-18 12:06:01.968397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:11.671 qpair failed and we were unable to recover it.
00:30:11.671 [2024-04-18 12:06:01.968537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.968790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.968806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.671 qpair failed and we were unable to recover it. 00:30:11.671 [2024-04-18 12:06:01.969028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.969284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.969300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.671 qpair failed and we were unable to recover it. 00:30:11.671 [2024-04-18 12:06:01.969534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.969736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.969752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.671 qpair failed and we were unable to recover it. 00:30:11.671 [2024-04-18 12:06:01.969955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.970152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.970168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.671 qpair failed and we were unable to recover it. 00:30:11.671 [2024-04-18 12:06:01.970401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.970724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.970740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.671 qpair failed and we were unable to recover it. 00:30:11.671 [2024-04-18 12:06:01.971007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.971195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.671 [2024-04-18 12:06:01.971211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.671 qpair failed and we were unable to recover it. 00:30:11.671 [2024-04-18 12:06:01.971429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.971641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.971658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 
00:30:11.672 [2024-04-18 12:06:01.971855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.972127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.972143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.972428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.972630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.972647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.973022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.973267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.973282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.973499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.973601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.973617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.973832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.974089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.974104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.974304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.974531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.974548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.974737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.975058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.975074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 
00:30:11.672 [2024-04-18 12:06:01.975336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.975546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.975562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.975754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.975935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.975951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.976206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.976466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.976482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.976825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.977083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.977099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.977346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.977548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.977565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.977770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.977974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.977990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.978196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.978388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.978404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 
00:30:11.672 [2024-04-18 12:06:01.978688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.978903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.978919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.979151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.979429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.979445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.672 [2024-04-18 12:06:01.979675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.979976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-04-18 12:06:01.979992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.672 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.980205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.980479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.980495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.980821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.981022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.981038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.981375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.981650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.981666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.981853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.982061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.982077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 
00:30:11.673 [2024-04-18 12:06:01.982342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.982615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.982644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.982828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.983119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.983135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 12:06:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.673 [2024-04-18 12:06:01.983335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.983439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.983465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.983666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 12:06:01 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.673 [2024-04-18 12:06:01.983928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.983944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 12:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.673 [2024-04-18 12:06:01.984145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 12:06:01 -- common/autotest_common.sh@10 -- # set +x 00:30:11.673 [2024-04-18 12:06:01.984332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.984349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.984692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.984887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.984903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 
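Buried between the connection errors in the block above, the script xtrace shows the test moving on with setup while the retries continue: nvmf/common.sh installs the cleanup trap ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT/SIGTERM/EXIT; both are functions from the test framework, not standalone tools), and host/target_disconnect.sh line 19 issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0. rpc_cmd is the framework's helper for issuing SPDK JSON-RPC calls; run by hand against a target listening on the default RPC socket, the same bdev creation would look roughly like this (socket path is the usual default and an assumption here, not taken from this log):
# create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0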
00:30:11.673 [2024-04-18 12:06:01.985108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.985317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.985332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.985554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.985752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.985768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.986039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.986355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.986371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.986580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.986836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.986852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.987059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.987268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.987284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.987554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.987743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.987759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.988023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.988296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.988312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 
00:30:11.673 [2024-04-18 12:06:01.988522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.988845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.988860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.989102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.989286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.989302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.989499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.989684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.989700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.989980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.990197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.990213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.990472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.990803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.990820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.991033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.991321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.991338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 00:30:11.673 [2024-04-18 12:06:01.991620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.991814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.673 [2024-04-18 12:06:01.991830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.673 qpair failed and we were unable to recover it. 
00:30:11.673 [2024-04-18 12:06:01.992029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.992224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.992241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.992430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.992651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.992667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.992877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.993165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.993182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.993388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.993668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.993688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.993895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.994236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.994258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.994475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.994667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.994683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.994805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.995055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.995073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 
00:30:11.674 [2024-04-18 12:06:01.995272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.995463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.995481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.995740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.996021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.996037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.996221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.996549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.996566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.996779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.996976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.996992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.997197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.997401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.997418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.997620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.997962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.997980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.998268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.998460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.998478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 
00:30:11.674 [2024-04-18 12:06:01.998608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.998799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.998817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.999104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.999375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:01.999392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:01.999745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.000119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.000139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.000422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.000652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.000671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.000862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.001118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.001135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.001429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.001697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.001713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.001919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.002254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.002270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 
00:30:11.674 [2024-04-18 12:06:02.002543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.002646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.002662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.002854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.003120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.003136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.003415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.003698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.003714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.003929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.004127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.004143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.004470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.004741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.004757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.004964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.005153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.005170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 00:30:11.674 [2024-04-18 12:06:02.005415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.005525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.005541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.674 qpair failed and we were unable to recover it. 
00:30:11.674 [2024-04-18 12:06:02.005748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.674 [2024-04-18 12:06:02.006011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.006031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.006299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.006490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.006506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.006784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.006995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.007011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.007287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.007469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.007486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.007692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.007891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.007908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.008123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.008413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.008429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.008638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.008904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.008920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 
00:30:11.675 [2024-04-18 12:06:02.009174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.009428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.009444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.009701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.009894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.009910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.010156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.010423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.010439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.010772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.010967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.010983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.011170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.011445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.011472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.011804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.012062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.012077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.012358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.012683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.012704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 
00:30:11.675 [2024-04-18 12:06:02.012891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.013214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.013229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.013499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.013792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.013808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.014090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.014359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.014375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.014587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.014932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.014948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.015295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.015503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.015519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.015844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.016110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.016126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 00:30:11.675 [2024-04-18 12:06:02.016326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.016609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.675 [2024-04-18 12:06:02.016625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.675 qpair failed and we were unable to recover it. 
00:30:11.675 [2024-04-18 12:06:02.016898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.675 [2024-04-18 12:06:02.017097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.675 [2024-04-18 12:06:02.017113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:11.675 qpair failed and we were unable to recover it.
[... the same four-entry retry sequence (two posix_sock_create connect() failed, errno = 111 entries, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x614000020040 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 12:06:02.017 through 12:06:02.066 ...]
00:30:11.679 [2024-04-18 12:06:02.066809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.679 [2024-04-18 12:06:02.067081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.679 [2024-04-18 12:06:02.067097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:30:11.679 qpair failed and we were unable to recover it.
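For context on the retry flood above: errno 111 is ECONNREFUSED, i.e. the initiator's connect() to 10.0.0.2:4420 is being refused because nothing is accepting connections on that port yet, and the nvme_tcp layer keeps retrying and failing each qpair. A minimal shell probe (hypothetical, not part of the test scripts) reproduces the same condition:

    # Try to open a TCP connection to the address/port the initiator is retrying above;
    # with no listener on 10.0.0.2:4420 the attempt fails, just like the connect() calls in the log.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 not accepting connections (connect() refused, cf. errno = 111)"
    fi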
00:30:11.679 [2024-04-18 12:06:02.067283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.067478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.067495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.067721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.067997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.068012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.068223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.068431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.068447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.068747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.069020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.069037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.069360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 Malloc0 00:30:11.679 [2024-04-18 12:06:02.069614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.069631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.069924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.070181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.070197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 12:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.679 [2024-04-18 12:06:02.070475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.070676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.070692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 
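The bare "Malloc0" interleaved above is most likely the bdev name echoed back by the RPC that creates the test's RAM-backed block device, and the surrounding autotest_common.sh trace lines are the harness checking the RPC's return code. Stripped of the wrappers, that step looks roughly like the following (the sizes here are illustrative, not taken from this log):

    # Create a malloc (RAM) bdev named Malloc0 to use later as the namespace backing store.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512-byte blocks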
00:30:11.679 12:06:02 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:11.679 [2024-04-18 12:06:02.070982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 12:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.679 [2024-04-18 12:06:02.071239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.071255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:30:11.679 [2024-04-18 12:06:02.071555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.071809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.071826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.072084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.072425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.072441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.072657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.072986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.073001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.073347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.073583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.073602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.073854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.074038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.074054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.074321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.074524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.074540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 
00:30:11.679 [2024-04-18 12:06:02.074774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.075034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.075050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.075331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.075539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.075555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.075907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.076167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.076183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.076377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.076659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.076675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.679 qpair failed and we were unable to recover it. 00:30:11.679 [2024-04-18 12:06:02.076950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.679 [2024-04-18 12:06:02.077104] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.679 [2024-04-18 12:06:02.077171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.077185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.077375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.077602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.077618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.077848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.078101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.078117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 
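The target_disconnect.sh@21 trace and the "*** TCP Transport Init ***" notice above are the target-side step that registers the TCP transport with the NVMe-oF layer. Outside the rpc_cmd wrapper the equivalent direct call is approximately this (any extra options the wrapper passes are omitted):

    # Register the TCP transport on the SPDK NVMe-oF target; this is what produces
    # the "*** TCP Transport Init ***" notice from tcp.c.
    scripts/rpc.py nvmf_create_transport -t TCP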
00:30:11.680 [2024-04-18 12:06:02.078372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.078646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.078663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.078931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.079184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.079200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.079396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.079684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.079700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.079969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.080225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.080241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.080505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.080810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.080826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.081128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.081408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.081424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.081648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.081838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.081854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 
00:30:11.680 [2024-04-18 12:06:02.082056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.082258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.082274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.082488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.082780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.082796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.083003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.083216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.083231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.083508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.083728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.083744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.083948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.084207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.084224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.084485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.084835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.084851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.085185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.085454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.085470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 
00:30:11.680 [2024-04-18 12:06:02.085821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.086015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 12:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.680 [2024-04-18 12:06:02.086031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.086357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 12:06:02 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.680 [2024-04-18 12:06:02.086558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.086588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 12:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.680 [2024-04-18 12:06:02.086868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:30:11.680 [2024-04-18 12:06:02.087135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.087152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.087423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.087787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.087803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.088013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.088266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.088282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.088647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.088836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.088853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 
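Next, target_disconnect.sh@22 creates the NVMe-oF subsystem the host will later connect to; without the rpc_cmd wrapper this is simply:

    # Create subsystem cnode1, allow any host (-a), with the given serial number (-s).
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001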
00:30:11.680 [2024-04-18 12:06:02.089144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.089408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.089424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.089692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.089982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.089998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.090321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.090610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.090627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.090902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.091154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.091169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.091461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.091682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.091698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.680 qpair failed and we were unable to recover it. 00:30:11.680 [2024-04-18 12:06:02.091961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.680 [2024-04-18 12:06:02.092251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.092267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.092590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.092842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.092858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 
00:30:11.681 [2024-04-18 12:06:02.093183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.093442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.093463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.093673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 12:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.681 [2024-04-18 12:06:02.093994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.094014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 12:06:02 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.681 [2024-04-18 12:06:02.094383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.094658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 12:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.681 [2024-04-18 12:06:02.094674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.094904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:30:11.681 [2024-04-18 12:06:02.095155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.095171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.095496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.095707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.095723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.095992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.096201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.096217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.096557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.096901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.096917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 
00:30:11.681 [2024-04-18 12:06:02.097195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.097536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.097552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.097818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.098093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.098109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.098301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.098511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.098527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.098730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.098929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.098945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.099271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.099521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.099537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.099838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.100121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.100138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.100465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.100668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.100684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 
00:30:11.681 [2024-04-18 12:06:02.100960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.101295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.101311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.101579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.101775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.101791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 12:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.681 [2024-04-18 12:06:02.102004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.102203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.102218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 12:06:02 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.681 [2024-04-18 12:06:02.102538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 12:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.681 [2024-04-18 12:06:02.102743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.102760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:30:11.681 [2024-04-18 12:06:02.102968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.103237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.103253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.103537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.103756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.103772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 
00:30:11.681 [2024-04-18 12:06:02.103983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.104188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.104204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.104404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.104624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.104643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.104841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.105165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.105181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:30:11.681 qpair failed and we were unable to recover it. 00:30:11.681 [2024-04-18 12:06:02.105381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.681 [2024-04-18 12:06:02.105572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.681 12:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.681 [2024-04-18 12:06:02.110221] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:30:11.681 [2024-04-18 12:06:02.110282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000020040 (107): Transport endpoint is not connected 00:30:11.681 12:06:02 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.681 [2024-04-18 12:06:02.110379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.681 qpair failed and we were unable to recover it. 
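Interleaved with those connect retries, the test script has now built up the target side over RPC: nvmf_create_subsystem for nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns to attach Malloc0, and nvmf_subsystem_add_listener for both the subsystem and discovery, after which nvmf_tcp_listen reports the target listening on 10.0.0.2 port 4420. A rough standalone equivalent of that sequence driven by SPDK's scripts/rpc.py is sketched below; the rpc.py path, the default local rpc.sock, and the TCP transport having been created earlier in the run are assumptions, not something shown in this excerpt:

    # sketch only: mirrors the rpc_cmd calls visible in the log above
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 'discovery' is copied from the log; some SPDK versions expect the full
    # discovery NQN (nqn.2014-08.org.nvmexpress.discovery) here instead
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the socket-level ECONNREFUSED retries stop and the failures move up a layer to the Fabrics CONNECT command, as the entries that follow show.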
00:30:11.681 12:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.681 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:30:11.682 12:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.682 12:06:02 -- host/target_disconnect.sh@58 -- # wait 2656463 00:30:11.682 [2024-04-18 12:06:02.118680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.118817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.118844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.118866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.118878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.118904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 00:30:11.682 [2024-04-18 12:06:02.128631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.128751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.128775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.128790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.128801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.128825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 00:30:11.682 [2024-04-18 12:06:02.138631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.138749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.138772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.138789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.138800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.138824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 
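From this point the pattern repeats: the target rejects each new I/O queue pair with "Unknown controller ID 0x1", and the host sees the Fabrics CONNECT command complete with sct 1, sc 130 before giving up on the qpair. Decoding that status by hand is a sketch and an editor's reading of the spec, not part of the log: sct 1 is the command-specific status code type, and 130 decimal is 0x82, which the NVMe-over-Fabrics specification lists for the Connect command as "Connect Invalid Parameters"; that would fit a host retrying I/O queue connects for a controller the target side has already dropped during the disconnect test.

    printf 'sc = 0x%02x\n' 130   # prints: sc = 0x82
    # sct 1 = command-specific status; for a Fabrics CONNECT, 0x82 is
    # "Connect Invalid Parameters" per the NVMe-oF spec (hedged interpretation)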
00:30:11.682 [2024-04-18 12:06:02.148668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.148808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.148833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.148846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.148857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.148881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 00:30:11.682 [2024-04-18 12:06:02.158690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.158809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.158832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.158846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.158857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.158881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 00:30:11.682 [2024-04-18 12:06:02.168728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.168843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.168865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.168878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.168889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.168913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 
00:30:11.682 [2024-04-18 12:06:02.178685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.178798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.178821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.178834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.178845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.178869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 00:30:11.682 [2024-04-18 12:06:02.188722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.188832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.188855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.188869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.188880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.188904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 00:30:11.682 [2024-04-18 12:06:02.198793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.682 [2024-04-18 12:06:02.198916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.682 [2024-04-18 12:06:02.198939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.682 [2024-04-18 12:06:02.198952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.682 [2024-04-18 12:06:02.198963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.682 [2024-04-18 12:06:02.198991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.682 qpair failed and we were unable to recover it. 
00:30:11.942 [2024-04-18 12:06:02.208862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.209086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.209111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.209125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.209136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.209161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.218859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.218969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.218992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.219005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.219016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.219039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.228852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.229059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.229083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.229101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.229112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.229136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 
00:30:11.942 [2024-04-18 12:06:02.238886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.238999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.239021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.239034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.239045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.239068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.249012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.249121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.249143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.249157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.249168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.249192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.258939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.259080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.259102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.259116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.259127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.259150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 
00:30:11.942 [2024-04-18 12:06:02.269051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.269162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.269185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.269198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.269209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.269234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.278998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.279108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.279133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.279147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.279158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.279182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.289061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.289175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.289197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.289210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.289221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.289245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 
00:30:11.942 [2024-04-18 12:06:02.299174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.299299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.299321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.299335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.299346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.299370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.309103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.309208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.309230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.309244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.942 [2024-04-18 12:06:02.309255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.942 [2024-04-18 12:06:02.309279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.942 qpair failed and we were unable to recover it. 00:30:11.942 [2024-04-18 12:06:02.319335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.942 [2024-04-18 12:06:02.319448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.942 [2024-04-18 12:06:02.319481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.942 [2024-04-18 12:06:02.319495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.319506] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.319530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 
00:30:11.943 [2024-04-18 12:06:02.329143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.329258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.329281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.329295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.329306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.329331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.339235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.339345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.339367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.339381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.339392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.339415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.349212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.349354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.349376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.349390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.349401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.349425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 
00:30:11.943 [2024-04-18 12:06:02.359422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.359547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.359570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.359584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.359595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.359622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.369296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.369404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.369426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.369439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.369456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.369481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.379347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.379465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.379488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.379501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.379512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.379535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 
00:30:11.943 [2024-04-18 12:06:02.389345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.389495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.389517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.389531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.389542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.389565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.399465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.399598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.399620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.399634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.399645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.399670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.409384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.409514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.409539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.409552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.409563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.409587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 
00:30:11.943 [2024-04-18 12:06:02.419456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.419672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.419696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.419710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.419721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.419746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.429448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.429560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.429582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.429596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.429607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.429634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 00:30:11.943 [2024-04-18 12:06:02.439473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.439585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.439607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.439621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.439632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.439656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.943 qpair failed and we were unable to recover it. 
00:30:11.943 [2024-04-18 12:06:02.449527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.943 [2024-04-18 12:06:02.449639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.943 [2024-04-18 12:06:02.449661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.943 [2024-04-18 12:06:02.449675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.943 [2024-04-18 12:06:02.449688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.943 [2024-04-18 12:06:02.449712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.944 qpair failed and we were unable to recover it. 00:30:11.944 [2024-04-18 12:06:02.459571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.944 [2024-04-18 12:06:02.459685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.944 [2024-04-18 12:06:02.459707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.944 [2024-04-18 12:06:02.459720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.944 [2024-04-18 12:06:02.459732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.944 [2024-04-18 12:06:02.459755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.944 qpair failed and we were unable to recover it. 00:30:11.944 [2024-04-18 12:06:02.469600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.944 [2024-04-18 12:06:02.469705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.944 [2024-04-18 12:06:02.469727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.944 [2024-04-18 12:06:02.469740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.944 [2024-04-18 12:06:02.469751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.944 [2024-04-18 12:06:02.469775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.944 qpair failed and we were unable to recover it. 
00:30:11.944 [2024-04-18 12:06:02.479654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.944 [2024-04-18 12:06:02.479799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.944 [2024-04-18 12:06:02.479821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.944 [2024-04-18 12:06:02.479835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.944 [2024-04-18 12:06:02.479845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:11.944 [2024-04-18 12:06:02.479868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.944 qpair failed and we were unable to recover it. 00:30:12.202 [2024-04-18 12:06:02.489722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.202 [2024-04-18 12:06:02.489834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.202 [2024-04-18 12:06:02.489856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.202 [2024-04-18 12:06:02.489870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.202 [2024-04-18 12:06:02.489881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.202 [2024-04-18 12:06:02.489904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.202 qpair failed and we were unable to recover it. 00:30:12.202 [2024-04-18 12:06:02.499717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.202 [2024-04-18 12:06:02.499828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.202 [2024-04-18 12:06:02.499850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.202 [2024-04-18 12:06:02.499864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.202 [2024-04-18 12:06:02.499874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.202 [2024-04-18 12:06:02.499897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.202 qpair failed and we were unable to recover it. 
00:30:12.202 [2024-04-18 12:06:02.509668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.202 [2024-04-18 12:06:02.509786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.202 [2024-04-18 12:06:02.509808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.202 [2024-04-18 12:06:02.509822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.202 [2024-04-18 12:06:02.509832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.202 [2024-04-18 12:06:02.509856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.202 qpair failed and we were unable to recover it. 00:30:12.202 [2024-04-18 12:06:02.519818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.202 [2024-04-18 12:06:02.519956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.202 [2024-04-18 12:06:02.519978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.202 [2024-04-18 12:06:02.519991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.202 [2024-04-18 12:06:02.520002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.202 [2024-04-18 12:06:02.520025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.202 qpair failed and we were unable to recover it. 00:30:12.202 [2024-04-18 12:06:02.529806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.202 [2024-04-18 12:06:02.529918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.202 [2024-04-18 12:06:02.529940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.202 [2024-04-18 12:06:02.529954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.202 [2024-04-18 12:06:02.529965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.202 [2024-04-18 12:06:02.529988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.202 qpair failed and we were unable to recover it. 
00:30:12.202 [2024-04-18 12:06:02.539814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.202 [2024-04-18 12:06:02.540033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.202 [2024-04-18 12:06:02.540057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.202 [2024-04-18 12:06:02.540074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.540085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.540108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.549869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.549979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.550002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.550015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.550027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.550050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.559851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.559970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.559992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.560005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.560016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.560039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 
00:30:12.203 [2024-04-18 12:06:02.569883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.569990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.570011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.570024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.570035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.570060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.579792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.579899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.579921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.579934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.579945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.579968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.589903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.590039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.590061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.590074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.590085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.590108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 
00:30:12.203 [2024-04-18 12:06:02.599921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.600040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.600062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.600075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.600086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.600109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.610005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.610125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.610147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.610160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.610171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.610195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.620088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.620208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.620230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.620244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.620255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.620278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 
00:30:12.203 [2024-04-18 12:06:02.630100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.630206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.630228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.630244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.630276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.630300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.640040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.640156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.640179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.640192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.640203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.640226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.650062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.650170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.650193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.650207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.650217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.650241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 
00:30:12.203 [2024-04-18 12:06:02.660063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.660269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.660292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.660305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.660316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.660343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.670106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.203 [2024-04-18 12:06:02.670215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.203 [2024-04-18 12:06:02.670237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.203 [2024-04-18 12:06:02.670251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.203 [2024-04-18 12:06:02.670262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.203 [2024-04-18 12:06:02.670286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.203 qpair failed and we were unable to recover it. 00:30:12.203 [2024-04-18 12:06:02.680248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.680466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.680489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.680503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.680514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.680538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 
00:30:12.204 [2024-04-18 12:06:02.690202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.690338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.690360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.690374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.690385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.690408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 00:30:12.204 [2024-04-18 12:06:02.700196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.700327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.700349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.700363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.700373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.700396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 00:30:12.204 [2024-04-18 12:06:02.710180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.710287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.710309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.710322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.710333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.710356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 
00:30:12.204 [2024-04-18 12:06:02.720324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.720488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.720513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.720527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.720538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.720562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 00:30:12.204 [2024-04-18 12:06:02.730292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.730511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.730534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.730548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.730560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.730584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 00:30:12.204 [2024-04-18 12:06:02.740275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.204 [2024-04-18 12:06:02.740412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.204 [2024-04-18 12:06:02.740434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.204 [2024-04-18 12:06:02.740447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.204 [2024-04-18 12:06:02.740466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.204 [2024-04-18 12:06:02.740490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.204 qpair failed and we were unable to recover it. 
00:30:12.465 [2024-04-18 12:06:02.750292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.750407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.750430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.750444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.750464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.750489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.760381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.760502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.760524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.760538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.760549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.760575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.770471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.770584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.770607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.770620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.770631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.770656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 
00:30:12.465 [2024-04-18 12:06:02.780405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.780534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.780556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.780570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.780581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.780604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.790470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.790687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.790709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.790723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.790734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.790758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.800529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.800637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.800659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.800673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.800684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.800708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 
00:30:12.465 [2024-04-18 12:06:02.810555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.810662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.810687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.810700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.810711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.810736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.820543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.820649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.820671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.820685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.820695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.820718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.830597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.830706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.830729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.830742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.830753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.830777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 
00:30:12.465 [2024-04-18 12:06:02.840720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.840851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.840874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.840887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.840898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.840921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.850592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.850701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.850725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.850740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.465 [2024-04-18 12:06:02.850754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.465 [2024-04-18 12:06:02.850778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.465 qpair failed and we were unable to recover it. 00:30:12.465 [2024-04-18 12:06:02.860747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.465 [2024-04-18 12:06:02.860856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.465 [2024-04-18 12:06:02.860878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.465 [2024-04-18 12:06:02.860892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.860902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.860926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 
00:30:12.466 [2024-04-18 12:06:02.870711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.870818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.870840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.870854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.870865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.870889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.880764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.880871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.880893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.880906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.880917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.880940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.890762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.890869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.890896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.890910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.890921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.890947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 
00:30:12.466 [2024-04-18 12:06:02.900775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.900885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.900907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.900920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.900931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.900954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.910802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.910933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.910954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.910968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.910979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.911002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.920881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.920992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.921013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.921027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.921038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.921061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 
00:30:12.466 [2024-04-18 12:06:02.930846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.930965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.930987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.931000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.931011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.931034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.940895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.941048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.941070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.941083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.941097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.941121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.950935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.951038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.951060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.951073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.951084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.951107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 
00:30:12.466 [2024-04-18 12:06:02.961036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.961156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.961179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.961193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.961204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.961227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.970963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.971068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.971091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.971104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.971116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.971139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:02.981063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.981273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.981297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.981311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.981321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.981345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 
00:30:12.466 [2024-04-18 12:06:02.991072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.466 [2024-04-18 12:06:02.991182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.466 [2024-04-18 12:06:02.991205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.466 [2024-04-18 12:06:02.991218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.466 [2024-04-18 12:06:02.991229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.466 [2024-04-18 12:06:02.991253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.466 qpair failed and we were unable to recover it. 00:30:12.466 [2024-04-18 12:06:03.001145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.467 [2024-04-18 12:06:03.001360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.467 [2024-04-18 12:06:03.001384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.467 [2024-04-18 12:06:03.001398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.467 [2024-04-18 12:06:03.001410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.467 [2024-04-18 12:06:03.001434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.467 qpair failed and we were unable to recover it. 00:30:12.726 [2024-04-18 12:06:03.011160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.726 [2024-04-18 12:06:03.011292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.726 [2024-04-18 12:06:03.011314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.726 [2024-04-18 12:06:03.011328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.726 [2024-04-18 12:06:03.011339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.726 [2024-04-18 12:06:03.011362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.726 qpair failed and we were unable to recover it. 
00:30:12.726 [2024-04-18 12:06:03.021198] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.726 [2024-04-18 12:06:03.021315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.726 [2024-04-18 12:06:03.021336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.726 [2024-04-18 12:06:03.021350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.726 [2024-04-18 12:06:03.021361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.726 [2024-04-18 12:06:03.021384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.726 qpair failed and we were unable to recover it. 00:30:12.726 [2024-04-18 12:06:03.031234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.726 [2024-04-18 12:06:03.031346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.726 [2024-04-18 12:06:03.031368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.726 [2024-04-18 12:06:03.031384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.726 [2024-04-18 12:06:03.031395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.726 [2024-04-18 12:06:03.031419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.726 qpair failed and we were unable to recover it. 00:30:12.726 [2024-04-18 12:06:03.041309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.726 [2024-04-18 12:06:03.041472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.726 [2024-04-18 12:06:03.041494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.041508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.041518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.041542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 
00:30:12.727 [2024-04-18 12:06:03.051243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.051351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.051373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.051387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.051398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.051422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.061264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.061369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.061391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.061404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.061414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.061438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.071287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.071442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.071470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.071484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.071494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.071519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 
00:30:12.727 [2024-04-18 12:06:03.081445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.081584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.081607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.081621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.081631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.081654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.091319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.091431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.091462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.091477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.091488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.091512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.101391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.101506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.101529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.101542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.101553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.101577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 
00:30:12.727 [2024-04-18 12:06:03.111464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.111577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.111599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.111613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.111624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.111647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.121496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.121597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.121622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.121636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.121646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.121674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.131468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.131580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.131603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.131616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.131627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.131651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 
00:30:12.727 [2024-04-18 12:06:03.141495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.141608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.141630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.141644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.141654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.141685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.151520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.151663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.151686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.151699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.151710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.151733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.161572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.161672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.161694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.161708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.161718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.161744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 
00:30:12.727 [2024-04-18 12:06:03.171584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.171704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.171727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.171741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.171751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.171775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.181594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.181713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.181735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.727 [2024-04-18 12:06:03.181749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.727 [2024-04-18 12:06:03.181760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.727 [2024-04-18 12:06:03.181783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.727 qpair failed and we were unable to recover it. 00:30:12.727 [2024-04-18 12:06:03.191736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.727 [2024-04-18 12:06:03.191852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.727 [2024-04-18 12:06:03.191874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.191887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.191898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.191922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 
00:30:12.728 [2024-04-18 12:06:03.201626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.201737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.201759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.201772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.201783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.201806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 00:30:12.728 [2024-04-18 12:06:03.211664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.211767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.211792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.211806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.211817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.211840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 00:30:12.728 [2024-04-18 12:06:03.221665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.221802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.221825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.221838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.221849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.221872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 
00:30:12.728 [2024-04-18 12:06:03.231705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.231867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.231890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.231903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.231914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.231938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 00:30:12.728 [2024-04-18 12:06:03.241797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.241920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.241942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.241955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.241966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.241989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 00:30:12.728 [2024-04-18 12:06:03.251801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.251915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.251938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.251951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.251965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.251988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 
00:30:12.728 [2024-04-18 12:06:03.261884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.262007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.262030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.262043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.262054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.262077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 00:30:12.728 [2024-04-18 12:06:03.271862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.728 [2024-04-18 12:06:03.271970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.728 [2024-04-18 12:06:03.271992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.728 [2024-04-18 12:06:03.272006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.728 [2024-04-18 12:06:03.272017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.728 [2024-04-18 12:06:03.272040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.728 qpair failed and we were unable to recover it. 00:30:12.987 [2024-04-18 12:06:03.281887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.987 [2024-04-18 12:06:03.282009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.987 [2024-04-18 12:06:03.282032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.987 [2024-04-18 12:06:03.282046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.987 [2024-04-18 12:06:03.282056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.987 [2024-04-18 12:06:03.282080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.987 qpair failed and we were unable to recover it. 
00:30:12.987 [2024-04-18 12:06:03.291958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.987 [2024-04-18 12:06:03.292074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.987 [2024-04-18 12:06:03.292095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.987 [2024-04-18 12:06:03.292109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.987 [2024-04-18 12:06:03.292120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.987 [2024-04-18 12:06:03.292143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.987 qpair failed and we were unable to recover it. 00:30:12.987 [2024-04-18 12:06:03.301951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.987 [2024-04-18 12:06:03.302066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.987 [2024-04-18 12:06:03.302088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.987 [2024-04-18 12:06:03.302102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.987 [2024-04-18 12:06:03.302112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.987 [2024-04-18 12:06:03.302136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.987 qpair failed and we were unable to recover it. 00:30:12.987 [2024-04-18 12:06:03.311994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.987 [2024-04-18 12:06:03.312109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.987 [2024-04-18 12:06:03.312131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.987 [2024-04-18 12:06:03.312144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.987 [2024-04-18 12:06:03.312155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.987 [2024-04-18 12:06:03.312179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.987 qpair failed and we were unable to recover it. 
00:30:12.987 [2024-04-18 12:06:03.322046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.987 [2024-04-18 12:06:03.322151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.987 [2024-04-18 12:06:03.322173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.987 [2024-04-18 12:06:03.322186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.987 [2024-04-18 12:06:03.322197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.322220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.332034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.332163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.332185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.332199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.332209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.332233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.342150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.342262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.342285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.342298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.342312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.342336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 
00:30:12.988 [2024-04-18 12:06:03.352052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.352161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.352184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.352197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.352208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.352236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.362118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.362339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.362362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.362376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.362388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.362411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.372131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.372236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.372258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.372272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.372284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.372307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 
00:30:12.988 [2024-04-18 12:06:03.382178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.382439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.382469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.382483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.382494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.382518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.392158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.392262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.392284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.392297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.392308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.392331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.402298] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.402505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.402527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.402548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.402560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.402584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 
00:30:12.988 [2024-04-18 12:06:03.412254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.412361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.412383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.412397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.412408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.412431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.422365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.422482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.422505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.422518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.422529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.422553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.432523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.432631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.432654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.432670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.432681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.432704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 
00:30:12.988 [2024-04-18 12:06:03.442394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.442513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.442536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.442550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.442561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.442584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.452392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.988 [2024-04-18 12:06:03.452506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.988 [2024-04-18 12:06:03.452530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.988 [2024-04-18 12:06:03.452544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.988 [2024-04-18 12:06:03.452555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.988 [2024-04-18 12:06:03.452579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.988 qpair failed and we were unable to recover it. 00:30:12.988 [2024-04-18 12:06:03.462389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.462503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.462526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.462539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.462552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.462576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 
00:30:12.989 [2024-04-18 12:06:03.472495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.472609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.472631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.472644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.472655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.472678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 00:30:12.989 [2024-04-18 12:06:03.482439] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.482557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.482579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.482592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.482603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.482627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 00:30:12.989 [2024-04-18 12:06:03.492567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.492721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.492743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.492757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.492768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.492791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 
00:30:12.989 [2024-04-18 12:06:03.502574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.502680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.502702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.502715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.502726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.502749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 00:30:12.989 [2024-04-18 12:06:03.512625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.512761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.512783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.512796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.512807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.512830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 00:30:12.989 [2024-04-18 12:06:03.522635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.522751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.522776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.522790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.522801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.522825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 
00:30:12.989 [2024-04-18 12:06:03.532697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.989 [2024-04-18 12:06:03.532820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.989 [2024-04-18 12:06:03.532842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.989 [2024-04-18 12:06:03.532856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.989 [2024-04-18 12:06:03.532867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:12.989 [2024-04-18 12:06:03.532891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.989 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.542670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.542805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.542827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.542841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.542852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.542876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.552693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.552799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.552821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.552836] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.552846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.552869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 
00:30:13.248 [2024-04-18 12:06:03.562633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.562751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.562774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.562788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.562799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.562825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.572718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.572826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.572848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.572861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.572872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.572896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.582798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.582910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.582932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.582946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.582957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.582997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 
00:30:13.248 [2024-04-18 12:06:03.592819] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.592931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.592953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.592967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.592978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.593001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.602857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.602957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.602979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.602993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.603004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.603028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.612798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.612901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.612926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.612940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.612951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.612975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 
00:30:13.248 [2024-04-18 12:06:03.622872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.622983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.623008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.623022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.623033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.623057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.632966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.633078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.633099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.633113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.633124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.633147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.642958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.643066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.643088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.643101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.643112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.643135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 
00:30:13.248 [2024-04-18 12:06:03.652920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.653067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.653090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.653103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.653115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.653141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.663041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.663151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.663173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.663186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.663197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.663220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 00:30:13.248 [2024-04-18 12:06:03.673068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.673316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.673339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.673352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.673363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.248 [2024-04-18 12:06:03.673387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.248 qpair failed and we were unable to recover it. 
00:30:13.248 [2024-04-18 12:06:03.683017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.248 [2024-04-18 12:06:03.683163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.248 [2024-04-18 12:06:03.683185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.248 [2024-04-18 12:06:03.683199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.248 [2024-04-18 12:06:03.683210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.683234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.693079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.693284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.693307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.693320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.693332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.693356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.703289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.703401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.703423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.703436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.703447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.703479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 
00:30:13.249 [2024-04-18 12:06:03.713222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.713334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.713356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.713370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.713381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.713404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.723211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.723316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.723338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.723352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.723362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.723386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.733203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.733359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.733381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.733394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.733405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.733428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 
00:30:13.249 [2024-04-18 12:06:03.743236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.743349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.743371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.743384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.743398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.743422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.753311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.753421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.753443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.753464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.753475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.753499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.763289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.763392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.763414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.763427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.763439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.763471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 
00:30:13.249 [2024-04-18 12:06:03.773280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.773398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.773420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.773434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.773444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.773476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.783347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.783492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.783515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.783528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.783539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.783562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 00:30:13.249 [2024-04-18 12:06:03.793434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.249 [2024-04-18 12:06:03.793663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.249 [2024-04-18 12:06:03.793686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.249 [2024-04-18 12:06:03.793700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.249 [2024-04-18 12:06:03.793711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.249 [2024-04-18 12:06:03.793735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.249 qpair failed and we were unable to recover it. 
00:30:13.508 [2024-04-18 12:06:03.803434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.803545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.803567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.803581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.803592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.803617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.813530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.813645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.813667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.813681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.813692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.813720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.823500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.823614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.823636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.823650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.823662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.823686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 
00:30:13.508 [2024-04-18 12:06:03.833519] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.833627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.833650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.833666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.833678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.833702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.843504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.843743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.843767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.843782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.843793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.843818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.853522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.853736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.853760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.853774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.853785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.853809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 
00:30:13.508 [2024-04-18 12:06:03.863640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.863746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.863771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.863784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.863797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.863822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.873625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.873735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.873757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.873771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.873782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.873805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.883661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.883773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.883795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.883809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.883820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.883843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 
00:30:13.508 [2024-04-18 12:06:03.893623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.893735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.893757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.893771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.893782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.893806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.903642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.903752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.903774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.903787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.903798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.903822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.508 [2024-04-18 12:06:03.913689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.913795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.913818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.913832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.913849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.913873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 
00:30:13.508 [2024-04-18 12:06:03.923770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.508 [2024-04-18 12:06:03.923903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.508 [2024-04-18 12:06:03.923928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.508 [2024-04-18 12:06:03.923942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.508 [2024-04-18 12:06:03.923953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.508 [2024-04-18 12:06:03.923976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.508 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:03.933787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.933904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.933926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.933940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.933951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.933975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:03.943896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.944004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.944026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.944039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.944050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.944074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 
00:30:13.509 [2024-04-18 12:06:03.953903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.954055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.954078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.954092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.954103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.954126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:03.963784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.963894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.963916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.963929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.963940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.963963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:03.973881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.973997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.974020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.974034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.974046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.974070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 
00:30:13.509 [2024-04-18 12:06:03.983881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.983986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.984008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.984022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.984032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.984056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:03.993926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:03.994035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:03.994056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:03.994070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:03.994081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:03.994104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:04.003996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:04.004107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:04.004129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:04.004143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:04.004154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:04.004178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 
00:30:13.509 [2024-04-18 12:06:04.013952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:04.014060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:04.014085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:04.014098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:04.014109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:04.014133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:04.023987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:04.024098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:04.024121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:04.024134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:04.024145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:04.024169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:04.034038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:04.034145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:04.034167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:04.034181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:04.034192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:04.034215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 
00:30:13.509 [2024-04-18 12:06:04.044011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:04.044127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:04.044149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:04.044162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:04.044173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:04.044200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.509 [2024-04-18 12:06:04.054100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.509 [2024-04-18 12:06:04.054211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.509 [2024-04-18 12:06:04.054234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.509 [2024-04-18 12:06:04.054247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.509 [2024-04-18 12:06:04.054258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.509 [2024-04-18 12:06:04.054284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.509 qpair failed and we were unable to recover it. 00:30:13.767 [2024-04-18 12:06:04.064226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.767 [2024-04-18 12:06:04.064336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.767 [2024-04-18 12:06:04.064358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.767 [2024-04-18 12:06:04.064372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.767 [2024-04-18 12:06:04.064382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.767 [2024-04-18 12:06:04.064406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.767 qpair failed and we were unable to recover it. 
00:30:13.767 [2024-04-18 12:06:04.074326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.767 [2024-04-18 12:06:04.074448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.767 [2024-04-18 12:06:04.074479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.767 [2024-04-18 12:06:04.074493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.767 [2024-04-18 12:06:04.074504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.767 [2024-04-18 12:06:04.074528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.767 qpair failed and we were unable to recover it. 00:30:13.767 [2024-04-18 12:06:04.084182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.767 [2024-04-18 12:06:04.084293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.767 [2024-04-18 12:06:04.084315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.767 [2024-04-18 12:06:04.084328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.767 [2024-04-18 12:06:04.084339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.767 [2024-04-18 12:06:04.084362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.767 qpair failed and we were unable to recover it. 00:30:13.767 [2024-04-18 12:06:04.094160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.767 [2024-04-18 12:06:04.094266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.767 [2024-04-18 12:06:04.094290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.767 [2024-04-18 12:06:04.094304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.767 [2024-04-18 12:06:04.094316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.767 [2024-04-18 12:06:04.094340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.767 qpair failed and we were unable to recover it. 
00:30:13.767 [2024-04-18 12:06:04.104280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.767 [2024-04-18 12:06:04.104493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.767 [2024-04-18 12:06:04.104519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.767 [2024-04-18 12:06:04.104532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.767 [2024-04-18 12:06:04.104543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.767 [2024-04-18 12:06:04.104567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.767 qpair failed and we were unable to recover it. 00:30:13.767 [2024-04-18 12:06:04.114256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.767 [2024-04-18 12:06:04.114364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.767 [2024-04-18 12:06:04.114386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.767 [2024-04-18 12:06:04.114399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.767 [2024-04-18 12:06:04.114410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:30:13.768 [2024-04-18 12:06:04.114434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.768 qpair failed and we were unable to recover it. 00:30:13.768 [2024-04-18 12:06:04.114469] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:13.768 A controller has encountered a failure and is being reset. 00:30:13.768 Controller properly reset. 00:30:17.043 Initializing NVMe Controllers 00:30:17.043 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:17.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:17.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:17.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:17.043 Initialization complete. Launching workers. 
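The records above are the host side of the disconnect test: each attempt to add an I/O qpair is rejected by the target with "Unknown controller ID 0x1", the fabrics Connect completes with sct 1, sc 130 (0x82), the test application marks that qpair as unrecoverable, and once the Keep Alive submission itself fails the controller is reset and reattached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 before the worker threads below are launched. As a rough illustration only, not part of this test run, a manual check of the same listener from a host that has nvme-cli and the kernel nvme-tcp module available (an assumption; the harness itself uses the SPDK initiator, not the kernel one) might look like:

  # discover and connect to the subsystem referenced in the log above
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                      # the attached namespace(s) should show up as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach again when finished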
00:30:17.043 Starting thread on core 1 00:30:17.043 Starting thread on core 2 00:30:17.043 Starting thread on core 0 00:30:17.043 Starting thread on core 3 00:30:17.043 12:06:07 -- host/target_disconnect.sh@59 -- # sync 00:30:17.043 00:30:17.043 real 0m11.488s 00:30:17.043 user 0m28.859s 00:30:17.043 sys 0m5.563s 00:30:17.043 12:06:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.043 12:06:07 -- common/autotest_common.sh@10 -- # set +x 00:30:17.043 ************************************ 00:30:17.043 END TEST nvmf_target_disconnect_tc2 00:30:17.043 ************************************ 00:30:17.043 12:06:07 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:17.043 12:06:07 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:17.043 12:06:07 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:17.043 12:06:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:17.043 12:06:07 -- nvmf/common.sh@117 -- # sync 00:30:17.043 12:06:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:17.043 12:06:07 -- nvmf/common.sh@120 -- # set +e 00:30:17.043 12:06:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:17.043 12:06:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:17.043 rmmod nvme_tcp 00:30:17.043 rmmod nvme_fabrics 00:30:17.043 rmmod nvme_keyring 00:30:17.043 12:06:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:17.044 12:06:07 -- nvmf/common.sh@124 -- # set -e 00:30:17.044 12:06:07 -- nvmf/common.sh@125 -- # return 0 00:30:17.044 12:06:07 -- nvmf/common.sh@478 -- # '[' -n 2657276 ']' 00:30:17.044 12:06:07 -- nvmf/common.sh@479 -- # killprocess 2657276 00:30:17.044 12:06:07 -- common/autotest_common.sh@936 -- # '[' -z 2657276 ']' 00:30:17.044 12:06:07 -- common/autotest_common.sh@940 -- # kill -0 2657276 00:30:17.044 12:06:07 -- common/autotest_common.sh@941 -- # uname 00:30:17.044 12:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:17.044 12:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2657276 00:30:17.301 12:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:30:17.301 12:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:30:17.301 12:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2657276' 00:30:17.301 killing process with pid 2657276 00:30:17.301 12:06:07 -- common/autotest_common.sh@955 -- # kill 2657276 00:30:17.301 12:06:07 -- common/autotest_common.sh@960 -- # wait 2657276 00:30:18.675 12:06:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:18.675 12:06:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:18.675 12:06:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:18.675 12:06:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:18.675 12:06:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:18.675 12:06:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.675 12:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:18.675 12:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.213 12:06:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:21.213 00:30:21.213 real 0m22.732s 00:30:21.213 user 0m59.027s 00:30:21.213 sys 0m11.949s 00:30:21.213 12:06:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:21.213 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 ************************************ 00:30:21.213 END TEST nvmf_target_disconnect 00:30:21.213 
************************************ 00:30:21.213 12:06:11 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:30:21.213 12:06:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:21.213 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 12:06:11 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:30:21.213 00:30:21.213 real 21m38.100s 00:30:21.213 user 44m33.336s 00:30:21.213 sys 7m27.644s 00:30:21.213 12:06:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:21.213 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 ************************************ 00:30:21.213 END TEST nvmf_tcp 00:30:21.213 ************************************ 00:30:21.213 12:06:11 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:30:21.213 12:06:11 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:21.213 12:06:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:21.213 12:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:21.213 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 ************************************ 00:30:21.213 START TEST spdkcli_nvmf_tcp 00:30:21.213 ************************************ 00:30:21.213 12:06:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:21.213 * Looking for test storage... 00:30:21.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:21.213 12:06:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:21.213 12:06:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.213 12:06:11 -- nvmf/common.sh@7 -- # uname -s 00:30:21.213 12:06:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.213 12:06:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.213 12:06:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.213 12:06:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.213 12:06:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.213 12:06:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.213 12:06:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.213 12:06:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.213 12:06:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.213 12:06:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.213 12:06:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:21.213 12:06:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:21.213 12:06:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.213 12:06:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.213 12:06:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.213 12:06:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.213 12:06:11 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.213 12:06:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.213 12:06:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.213 12:06:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.213 12:06:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.213 12:06:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.213 12:06:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.213 12:06:11 -- paths/export.sh@5 -- # export PATH 00:30:21.213 12:06:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.213 12:06:11 -- nvmf/common.sh@47 -- # : 0 00:30:21.213 12:06:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:21.213 12:06:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:21.213 12:06:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.213 12:06:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.213 12:06:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.213 12:06:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:21.213 12:06:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:21.213 12:06:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:21.213 12:06:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:21.213 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 12:06:11 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:21.213 12:06:11 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2659528 00:30:21.213 12:06:11 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:21.213 12:06:11 -- spdkcli/common.sh@34 -- # 
waitforlisten 2659528 00:30:21.213 12:06:11 -- common/autotest_common.sh@817 -- # '[' -z 2659528 ']' 00:30:21.213 12:06:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.213 12:06:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:21.213 12:06:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.213 12:06:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:21.213 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 [2024-04-18 12:06:11.653578] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:21.213 [2024-04-18 12:06:11.653669] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659528 ] 00:30:21.213 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.473 [2024-04-18 12:06:11.777319] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:21.473 [2024-04-18 12:06:11.998211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.473 [2024-04-18 12:06:11.998217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.041 12:06:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:22.042 12:06:12 -- common/autotest_common.sh@850 -- # return 0 00:30:22.042 12:06:12 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:22.042 12:06:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:22.042 12:06:12 -- common/autotest_common.sh@10 -- # set +x 00:30:22.042 12:06:12 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:22.042 12:06:12 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:22.042 12:06:12 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:22.042 12:06:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:22.042 12:06:12 -- common/autotest_common.sh@10 -- # set +x 00:30:22.042 12:06:12 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:22.042 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:22.042 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:22.042 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:22.042 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:22.042 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:22.042 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:22.042 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:22.042 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 
True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:22.042 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:22.042 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:22.042 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:22.042 ' 00:30:22.301 [2024-04-18 12:06:12.797878] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:24.837 [2024-04-18 12:06:15.039549] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.775 [2024-04-18 12:06:16.215565] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:28.309 [2024-04-18 12:06:18.378179] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:30.212 [2024-04-18 12:06:20.236082] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:31.149 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:31.149 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:31.149 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:31.149 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:31.149 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:31.149 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:31.149 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:31.149 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:31.149 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:31.149 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:31.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:31.149 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:31.407 12:06:21 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:31.407 12:06:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:31.407 12:06:21 -- common/autotest_common.sh@10 -- # set +x 00:30:31.407 12:06:21 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:31.407 12:06:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:31.407 12:06:21 -- common/autotest_common.sh@10 -- # set +x 00:30:31.407 12:06:21 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:31.407 12:06:21 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:31.665 12:06:22 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:31.665 12:06:22 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:31.665 12:06:22 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:31.665 12:06:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:31.665 12:06:22 -- common/autotest_common.sh@10 -- # set +x 00:30:31.924 12:06:22 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:31.924 12:06:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:31.924 12:06:22 -- common/autotest_common.sh@10 -- # set +x 00:30:31.924 12:06:22 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:31.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:31.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:31.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:31.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:31.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:31.924 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:31.924 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:31.924 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:31.924 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:31.924 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:31.924 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:31.924 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:31.924 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:31.924 ' 00:30:37.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:37.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:37.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:37.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:37.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:37.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:37.231 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:37.231 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:37.231 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:37.231 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:37.231 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:37.231 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:37.231 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:37.231 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:37.493 12:06:27 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:37.493 12:06:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:37.493 12:06:27 -- common/autotest_common.sh@10 -- # set +x 00:30:37.493 12:06:27 -- spdkcli/nvmf.sh@90 -- # killprocess 2659528 00:30:37.493 12:06:27 -- common/autotest_common.sh@936 -- # '[' -z 2659528 ']' 00:30:37.493 12:06:27 -- common/autotest_common.sh@940 -- # kill -0 2659528 00:30:37.493 12:06:27 -- common/autotest_common.sh@941 -- # uname 00:30:37.493 12:06:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:37.493 12:06:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2659528 00:30:37.493 12:06:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:37.493 12:06:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:37.493 12:06:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2659528' 00:30:37.493 killing process with pid 2659528 00:30:37.493 12:06:27 -- common/autotest_common.sh@955 -- # kill 2659528 00:30:37.493 [2024-04-18 12:06:27.890732] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:37.493 12:06:27 -- common/autotest_common.sh@960 -- # wait 2659528 00:30:38.869 12:06:29 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:38.869 12:06:29 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:38.869 12:06:29 -- spdkcli/common.sh@13 -- # '[' -n 2659528 ']' 00:30:38.869 12:06:29 -- spdkcli/common.sh@14 -- # killprocess 2659528 00:30:38.869 12:06:29 -- common/autotest_common.sh@936 -- # '[' -z 2659528 ']' 00:30:38.870 12:06:29 -- common/autotest_common.sh@940 -- # kill -0 2659528 00:30:38.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2659528) - No such process 00:30:38.870 12:06:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2659528 is not found' 00:30:38.870 Process with pid 2659528 is not found 00:30:38.870 12:06:29 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:38.870 12:06:29 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:38.870 12:06:29 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:38.870 00:30:38.870 real 0m17.708s 00:30:38.870 user 0m35.693s 00:30:38.870 sys 0m1.038s 00:30:38.870 12:06:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:38.870 12:06:29 -- common/autotest_common.sh@10 -- # set +x 00:30:38.870 ************************************ 00:30:38.870 END TEST spdkcli_nvmf_tcp 00:30:38.870 ************************************ 00:30:38.870 12:06:29 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:38.870 12:06:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:38.870 12:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.870 12:06:29 -- common/autotest_common.sh@10 -- # set +x 00:30:38.870 ************************************ 00:30:38.870 START TEST 
nvmf_identify_passthru 00:30:38.870 ************************************ 00:30:38.870 12:06:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:39.128 * Looking for test storage... 00:30:39.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.128 12:06:29 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.128 12:06:29 -- nvmf/common.sh@7 -- # uname -s 00:30:39.128 12:06:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.128 12:06:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.128 12:06:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.128 12:06:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.128 12:06:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.128 12:06:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.128 12:06:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.128 12:06:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.128 12:06:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.128 12:06:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.128 12:06:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:39.128 12:06:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:39.128 12:06:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.128 12:06:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.128 12:06:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.128 12:06:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.128 12:06:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.128 12:06:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.128 12:06:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.128 12:06:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.128 12:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- paths/export.sh@5 -- # export PATH 00:30:39.129 12:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- nvmf/common.sh@47 -- # : 0 00:30:39.129 12:06:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:39.129 12:06:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:39.129 12:06:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.129 12:06:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.129 12:06:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.129 12:06:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:39.129 12:06:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:39.129 12:06:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:39.129 12:06:29 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.129 12:06:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.129 12:06:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.129 12:06:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.129 12:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- paths/export.sh@5 -- # export PATH 00:30:39.129 12:06:29 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.129 12:06:29 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:39.129 12:06:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:39.129 12:06:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.129 12:06:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:39.129 12:06:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:39.129 12:06:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:39.129 12:06:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.129 12:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.129 12:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.129 12:06:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:39.129 12:06:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:39.129 12:06:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:39.129 12:06:29 -- common/autotest_common.sh@10 -- # set +x 00:30:45.695 12:06:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:45.695 12:06:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:45.695 12:06:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:45.695 12:06:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:45.695 12:06:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:45.695 12:06:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:45.695 12:06:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:45.695 12:06:35 -- nvmf/common.sh@295 -- # net_devs=() 00:30:45.695 12:06:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:45.695 12:06:35 -- nvmf/common.sh@296 -- # e810=() 00:30:45.695 12:06:35 -- nvmf/common.sh@296 -- # local -ga e810 00:30:45.695 12:06:35 -- nvmf/common.sh@297 -- # x722=() 00:30:45.695 12:06:35 -- nvmf/common.sh@297 -- # local -ga x722 00:30:45.695 12:06:35 -- nvmf/common.sh@298 -- # mlx=() 00:30:45.695 12:06:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:45.695 12:06:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.695 12:06:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:45.695 12:06:35 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:45.695 12:06:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:45.695 12:06:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.695 12:06:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:45.695 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:45.695 12:06:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.695 12:06:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:45.695 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:45.695 12:06:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:45.695 12:06:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.695 12:06:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.695 12:06:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:45.695 12:06:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.695 12:06:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:45.695 Found net devices under 0000:af:00.0: cvl_0_0 00:30:45.695 12:06:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.695 12:06:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.695 12:06:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.695 12:06:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:45.695 12:06:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.695 12:06:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:45.695 Found net devices under 0000:af:00.1: cvl_0_1 00:30:45.695 12:06:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.695 12:06:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:45.695 12:06:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:45.695 12:06:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:45.695 12:06:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:45.695 12:06:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.695 12:06:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.695 12:06:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.695 12:06:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:45.695 12:06:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.695 12:06:35 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.695 12:06:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:45.695 12:06:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.695 12:06:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.695 12:06:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:45.695 12:06:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:45.695 12:06:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.695 12:06:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.695 12:06:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.695 12:06:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.695 12:06:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:45.695 12:06:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.695 12:06:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.695 12:06:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.695 12:06:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:45.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:30:45.695 00:30:45.695 --- 10.0.0.2 ping statistics --- 00:30:45.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.695 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:45.695 12:06:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:30:45.695 00:30:45.695 --- 10.0.0.1 ping statistics --- 00:30:45.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.695 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:30:45.695 12:06:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.695 12:06:36 -- nvmf/common.sh@411 -- # return 0 00:30:45.695 12:06:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:45.695 12:06:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.695 12:06:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:45.695 12:06:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:45.695 12:06:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.695 12:06:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:45.695 12:06:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:45.695 12:06:36 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:45.695 12:06:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:45.695 12:06:36 -- common/autotest_common.sh@10 -- # set +x 00:30:45.695 12:06:36 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:45.695 12:06:36 -- common/autotest_common.sh@1510 -- # bdfs=() 00:30:45.695 12:06:36 -- common/autotest_common.sh@1510 -- # local bdfs 00:30:45.695 12:06:36 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:30:45.695 12:06:36 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:30:45.695 12:06:36 -- common/autotest_common.sh@1499 -- # bdfs=() 00:30:45.695 12:06:36 -- common/autotest_common.sh@1499 -- # local bdfs 00:30:45.695 12:06:36 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:30:45.695 12:06:36 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:45.695 12:06:36 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:30:45.696 12:06:36 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:30:45.696 12:06:36 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:30:45.696 12:06:36 -- common/autotest_common.sh@1513 -- # echo 0000:d8:00.0 00:30:45.696 12:06:36 -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:30:45.696 12:06:36 -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:30:45.696 12:06:36 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:45.696 12:06:36 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:45.696 12:06:36 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:45.954 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.225 12:06:41 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:30:51.225 12:06:41 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:51.225 12:06:41 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:51.225 12:06:41 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:51.225 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.413 12:06:45 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:55.413 12:06:45 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:55.413 12:06:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:55.413 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:30:55.672 12:06:45 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:55.672 12:06:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:55.672 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:30:55.672 12:06:45 -- target/identify_passthru.sh@31 -- # nvmfpid=2667415 00:30:55.672 12:06:45 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:55.672 12:06:45 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.672 12:06:45 -- target/identify_passthru.sh@35 -- # waitforlisten 2667415 00:30:55.672 12:06:45 -- common/autotest_common.sh@817 -- # '[' -z 2667415 ']' 00:30:55.672 12:06:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.672 12:06:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:55.672 12:06:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.672 12:06:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:55.672 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:30:55.672 [2024-04-18 12:06:46.079617] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
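For readers replaying this step outside the CI pool, the identify pass above reduces to: enumerate the locally attached NVMe controllers, take the first PCI address, and cache its Serial Number and Model Number for the later NVMe/TCP comparison. A minimal sketch built only from the commands visible in this log (the $rootdir checkout path, the use of head -n1 to pick the first controller, and the 0000:d8:00.0 address are specific to this run):

  # enumerate NVMe controllers known to SPDK and keep the first PCI address
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

  # read Identify Controller data directly over PCIe
  nvme_serial_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
                       | grep 'Serial Number:' | awk '{print $3}')
  nvme_model_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
                      | grep 'Model Number:' | awk '{print $3}')

  echo "$bdf $nvme_serial_number $nvme_model_number"   # e.g. 0000:d8:00.0 BTLN916500W71P6AGN INTEL

Note that awk '{print $3}' keeps only the first word of the model string, which is why the log records nvme_model_number=INTEL rather than the full model name.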
00:30:55.672 [2024-04-18 12:06:46.079708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.672 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.672 [2024-04-18 12:06:46.211322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:55.930 [2024-04-18 12:06:46.428283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.930 [2024-04-18 12:06:46.428331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.930 [2024-04-18 12:06:46.428342] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.930 [2024-04-18 12:06:46.428354] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.930 [2024-04-18 12:06:46.428364] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.930 [2024-04-18 12:06:46.428503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.930 [2024-04-18 12:06:46.428562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.930 [2024-04-18 12:06:46.428626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.930 [2024-04-18 12:06:46.428634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.497 12:06:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:56.497 12:06:46 -- common/autotest_common.sh@850 -- # return 0 00:30:56.497 12:06:46 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:56.497 12:06:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.497 12:06:46 -- common/autotest_common.sh@10 -- # set +x 00:30:56.497 INFO: Log level set to 20 00:30:56.497 INFO: Requests: 00:30:56.497 { 00:30:56.497 "jsonrpc": "2.0", 00:30:56.497 "method": "nvmf_set_config", 00:30:56.497 "id": 1, 00:30:56.497 "params": { 00:30:56.497 "admin_cmd_passthru": { 00:30:56.497 "identify_ctrlr": true 00:30:56.497 } 00:30:56.497 } 00:30:56.497 } 00:30:56.497 00:30:56.497 INFO: response: 00:30:56.497 { 00:30:56.497 "jsonrpc": "2.0", 00:30:56.497 "id": 1, 00:30:56.497 "result": true 00:30:56.497 } 00:30:56.497 00:30:56.497 12:06:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.497 12:06:46 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:56.497 12:06:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.497 12:06:46 -- common/autotest_common.sh@10 -- # set +x 00:30:56.497 INFO: Setting log level to 20 00:30:56.497 INFO: Setting log level to 20 00:30:56.497 INFO: Log level set to 20 00:30:56.497 INFO: Log level set to 20 00:30:56.497 INFO: Requests: 00:30:56.497 { 00:30:56.497 "jsonrpc": "2.0", 00:30:56.497 "method": "framework_start_init", 00:30:56.497 "id": 1 00:30:56.497 } 00:30:56.497 00:30:56.497 INFO: Requests: 00:30:56.497 { 00:30:56.497 "jsonrpc": "2.0", 00:30:56.497 "method": "framework_start_init", 00:30:56.497 "id": 1 00:30:56.497 } 00:30:56.497 00:30:56.755 [2024-04-18 12:06:47.234815] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:56.755 INFO: response: 00:30:56.755 { 00:30:56.755 "jsonrpc": "2.0", 00:30:56.755 "id": 1, 00:30:56.755 "result": true 00:30:56.755 } 00:30:56.755 00:30:56.755 INFO: response: 00:30:56.755 { 00:30:56.755 
"jsonrpc": "2.0", 00:30:56.755 "id": 1, 00:30:56.755 "result": true 00:30:56.755 } 00:30:56.755 00:30:56.755 12:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.755 12:06:47 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:56.755 12:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.755 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:30:56.755 INFO: Setting log level to 40 00:30:56.755 INFO: Setting log level to 40 00:30:56.755 INFO: Setting log level to 40 00:30:56.755 [2024-04-18 12:06:47.253716] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.755 12:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.755 12:06:47 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:56.755 12:06:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:56.755 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:30:57.014 12:06:47 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:30:57.014 12:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.014 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:31:00.350 Nvme0n1 00:31:00.350 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.350 12:06:50 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:00.350 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.350 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:31:00.350 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.350 12:06:50 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:00.350 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.350 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:31:00.350 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.350 12:06:50 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.350 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.351 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:31:00.351 [2024-04-18 12:06:50.238917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.351 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.351 12:06:50 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:00.351 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.351 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:31:00.351 [2024-04-18 12:06:50.246637] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:00.351 [ 00:31:00.351 { 00:31:00.351 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:00.351 "subtype": "Discovery", 00:31:00.351 "listen_addresses": [], 00:31:00.351 "allow_any_host": true, 00:31:00.351 "hosts": [] 00:31:00.351 }, 00:31:00.351 { 00:31:00.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.351 "subtype": "NVMe", 00:31:00.351 "listen_addresses": [ 00:31:00.351 { 00:31:00.351 "transport": "TCP", 00:31:00.351 "trtype": "TCP", 00:31:00.351 "adrfam": "IPv4", 00:31:00.351 "traddr": "10.0.0.2", 00:31:00.351 "trsvcid": "4420" 00:31:00.351 } 00:31:00.351 ], 
00:31:00.351 "allow_any_host": true, 00:31:00.351 "hosts": [], 00:31:00.351 "serial_number": "SPDK00000000000001", 00:31:00.351 "model_number": "SPDK bdev Controller", 00:31:00.351 "max_namespaces": 1, 00:31:00.351 "min_cntlid": 1, 00:31:00.351 "max_cntlid": 65519, 00:31:00.351 "namespaces": [ 00:31:00.351 { 00:31:00.351 "nsid": 1, 00:31:00.351 "bdev_name": "Nvme0n1", 00:31:00.351 "name": "Nvme0n1", 00:31:00.351 "nguid": "5E84D05003B747BE85DEE6A6FACAD164", 00:31:00.351 "uuid": "5e84d050-03b7-47be-85de-e6a6facad164" 00:31:00.351 } 00:31:00.351 ] 00:31:00.351 } 00:31:00.351 ] 00:31:00.351 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.351 12:06:50 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:00.351 12:06:50 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:00.351 12:06:50 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:00.351 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.351 12:06:50 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:31:00.351 12:06:50 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:00.351 12:06:50 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:00.351 12:06:50 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:00.351 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.351 12:06:50 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:00.351 12:06:50 -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:31:00.351 12:06:50 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:00.351 12:06:50 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.351 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.351 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:31:00.351 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.351 12:06:50 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:00.351 12:06:50 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:00.351 12:06:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:00.351 12:06:50 -- nvmf/common.sh@117 -- # sync 00:31:00.351 12:06:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:00.351 12:06:50 -- nvmf/common.sh@120 -- # set +e 00:31:00.351 12:06:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.351 12:06:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:00.351 rmmod nvme_tcp 00:31:00.351 rmmod nvme_fabrics 00:31:00.351 rmmod nvme_keyring 00:31:00.351 12:06:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:00.351 12:06:50 -- nvmf/common.sh@124 -- # set -e 00:31:00.351 12:06:50 -- nvmf/common.sh@125 -- # return 0 00:31:00.351 12:06:50 -- nvmf/common.sh@478 -- # '[' -n 2667415 ']' 00:31:00.351 12:06:50 -- nvmf/common.sh@479 -- # killprocess 2667415 00:31:00.351 12:06:50 -- common/autotest_common.sh@936 -- # '[' -z 2667415 ']' 00:31:00.351 12:06:50 -- common/autotest_common.sh@940 -- # kill -0 2667415 00:31:00.351 12:06:50 -- common/autotest_common.sh@941 -- # uname 00:31:00.351 12:06:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:00.351 
12:06:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2667415 00:31:00.351 12:06:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:00.351 12:06:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:00.351 12:06:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2667415' 00:31:00.351 killing process with pid 2667415 00:31:00.351 12:06:50 -- common/autotest_common.sh@955 -- # kill 2667415 00:31:00.351 [2024-04-18 12:06:50.897194] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:00.351 12:06:50 -- common/autotest_common.sh@960 -- # wait 2667415 00:31:03.636 12:06:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:03.636 12:06:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:03.636 12:06:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:03.636 12:06:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.636 12:06:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.636 12:06:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.636 12:06:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:03.636 12:06:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.171 12:06:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.171 00:31:06.171 real 0m26.755s 00:31:06.171 user 0m37.471s 00:31:06.171 sys 0m6.724s 00:31:06.171 12:06:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:06.171 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:31:06.171 ************************************ 00:31:06.171 END TEST nvmf_identify_passthru 00:31:06.171 ************************************ 00:31:06.171 12:06:56 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:06.171 12:06:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:06.171 12:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:06.171 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:31:06.171 ************************************ 00:31:06.171 START TEST nvmf_dif 00:31:06.171 ************************************ 00:31:06.171 12:06:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:06.171 * Looking for test storage... 
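The nvmf_identify_passthru run that finished just above checks one property: with admin-command passthrough enabled, Identify data read over NVMe/TCP must match what the same controller reports over PCIe. The test issues these steps through its rpc_cmd helper; a condensed sketch using scripts/rpc.py directly is shown below (an assumption for readability -- the helper resolves the RPC socket of the target running inside cvl_0_0_ns_spdk, and the PCI address is specific to this node):

  # the target was started with --wait-for-rpc, so passthru is enabled before framework init
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # expose the local controller through a single-namespace subsystem
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # identify over the fabric; Serial/Model must equal the PCIe-side values captured earlier
  build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'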
00:31:06.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.171 12:06:56 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.171 12:06:56 -- nvmf/common.sh@7 -- # uname -s 00:31:06.171 12:06:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.171 12:06:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.171 12:06:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.171 12:06:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.171 12:06:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.171 12:06:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.171 12:06:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.171 12:06:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.171 12:06:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.171 12:06:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.171 12:06:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:06.171 12:06:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:06.171 12:06:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.171 12:06:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.171 12:06:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.171 12:06:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.171 12:06:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.171 12:06:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.171 12:06:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.171 12:06:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.171 12:06:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.171 12:06:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.171 12:06:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.171 12:06:56 -- paths/export.sh@5 -- # export PATH 00:31:06.171 12:06:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.171 12:06:56 -- nvmf/common.sh@47 -- # : 0 00:31:06.171 12:06:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.171 12:06:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.171 12:06:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.171 12:06:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.171 12:06:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.171 12:06:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.171 12:06:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.171 12:06:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.171 12:06:56 -- target/dif.sh@15 -- # NULL_META=16 00:31:06.171 12:06:56 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:06.171 12:06:56 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:06.171 12:06:56 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:06.171 12:06:56 -- target/dif.sh@135 -- # nvmftestinit 00:31:06.171 12:06:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:06.171 12:06:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.171 12:06:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:06.171 12:06:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:06.171 12:06:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:06.171 12:06:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.171 12:06:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:06.171 12:06:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.171 12:06:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:06.171 12:06:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:06.171 12:06:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.171 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:31:12.737 12:07:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:12.737 12:07:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.737 12:07:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.737 12:07:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.737 12:07:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.737 12:07:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.737 12:07:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.737 12:07:02 -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.737 12:07:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.737 12:07:02 -- nvmf/common.sh@296 -- # e810=() 00:31:12.737 12:07:02 -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.737 12:07:02 -- nvmf/common.sh@297 -- # x722=() 00:31:12.737 12:07:02 -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.737 12:07:02 -- nvmf/common.sh@298 -- # mlx=() 00:31:12.737 12:07:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.737 12:07:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:31:12.737 12:07:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.737 12:07:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.737 12:07:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.737 12:07:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.737 12:07:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.737 12:07:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:12.737 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:12.737 12:07:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.737 12:07:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:12.737 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:12.737 12:07:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.737 12:07:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.737 12:07:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.737 12:07:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:12.737 12:07:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.737 12:07:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:12.737 Found net devices under 0000:af:00.0: cvl_0_0 00:31:12.737 12:07:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.737 12:07:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.737 12:07:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.737 12:07:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:12.737 12:07:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.737 12:07:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:12.737 Found net devices under 0000:af:00.1: cvl_0_1 00:31:12.737 12:07:02 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:12.737 12:07:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:12.737 12:07:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:12.737 12:07:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:12.737 12:07:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:12.737 12:07:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.737 12:07:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.737 12:07:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.737 12:07:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.737 12:07:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.737 12:07:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.737 12:07:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.737 12:07:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.737 12:07:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.737 12:07:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.737 12:07:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.737 12:07:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.737 12:07:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.737 12:07:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.737 12:07:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.737 12:07:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.737 12:07:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.737 12:07:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.737 12:07:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.737 12:07:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:12.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:31:12.737 00:31:12.737 --- 10.0.0.2 ping statistics --- 00:31:12.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.737 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:31:12.997 12:07:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:12.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:31:12.997 00:31:12.997 --- 10.0.0.1 ping statistics --- 00:31:12.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.997 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:31:12.997 12:07:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.997 12:07:03 -- nvmf/common.sh@411 -- # return 0 00:31:12.997 12:07:03 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:12.997 12:07:03 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:16.285 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:16.285 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:16.286 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:16.286 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:16.286 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:16.286 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:16.286 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:16.286 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:16.286 12:07:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.286 12:07:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:16.286 12:07:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:16.286 12:07:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.286 12:07:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:16.286 12:07:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:16.286 12:07:06 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:16.286 12:07:06 -- target/dif.sh@137 -- # nvmfappstart 00:31:16.286 12:07:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:16.286 12:07:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:16.286 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:31:16.286 12:07:06 -- nvmf/common.sh@470 -- # nvmfpid=2673619 00:31:16.286 12:07:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:16.286 12:07:06 -- nvmf/common.sh@471 -- # waitforlisten 2673619 00:31:16.286 12:07:06 -- common/autotest_common.sh@817 -- # '[' -z 2673619 ']' 00:31:16.286 12:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.286 12:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:16.286 12:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
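For anyone reproducing this outside the harness, the nvmf_tcp_init sequence traced above reduces to the hand-runnable sketch below. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing and the namespace name are taken from this run and will differ on other hosts; treat it as a sketch of what the helper does on this machine, not as the helper itself. The idea is to move one E810 port into its own network namespace so that target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, root namespace) talk over the physical ports of a single host.

  # Names observed in this run
  TARGET_IF=cvl_0_0          # becomes the target-side interface (10.0.0.2)
  INITIATOR_IF=cvl_0_1       # stays in the root namespace (10.0.0.1)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # target reachable from the root namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1    # and the initiator from inside it

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF), which is why the later NVMF_APP invocations and ping checks in the trace carry the netns prefix.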
00:31:16.286 12:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:16.286 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:31:16.286 [2024-04-18 12:07:06.696319] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:16.286 [2024-04-18 12:07:06.696396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.286 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.286 [2024-04-18 12:07:06.824787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.545 [2024-04-18 12:07:07.038619] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.545 [2024-04-18 12:07:07.038670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.545 [2024-04-18 12:07:07.038683] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.545 [2024-04-18 12:07:07.038697] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.545 [2024-04-18 12:07:07.038706] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.545 [2024-04-18 12:07:07.038744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.114 12:07:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:17.114 12:07:07 -- common/autotest_common.sh@850 -- # return 0 00:31:17.114 12:07:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:17.114 12:07:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:17.114 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.114 12:07:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.114 12:07:07 -- target/dif.sh@139 -- # create_transport 00:31:17.114 12:07:07 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:17.114 12:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.114 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.114 [2024-04-18 12:07:07.505847] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.114 12:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.114 12:07:07 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:17.114 12:07:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:17.115 12:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:17.115 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.374 ************************************ 00:31:17.374 START TEST fio_dif_1_default 00:31:17.374 ************************************ 00:31:17.374 12:07:07 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:31:17.374 12:07:07 -- target/dif.sh@86 -- # create_subsystems 0 00:31:17.374 12:07:07 -- target/dif.sh@28 -- # local sub 00:31:17.374 12:07:07 -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.374 12:07:07 -- target/dif.sh@31 -- # create_subsystem 0 00:31:17.374 12:07:07 -- target/dif.sh@18 -- # local sub_id=0 00:31:17.374 12:07:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:17.374 12:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.374 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.374 
bdev_null0 00:31:17.374 12:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.374 12:07:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.374 12:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.374 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.374 12:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.374 12:07:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.374 12:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.374 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.374 12:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.374 12:07:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.374 12:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.374 12:07:07 -- common/autotest_common.sh@10 -- # set +x 00:31:17.374 [2024-04-18 12:07:07.698514] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.374 12:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.374 12:07:07 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:17.374 12:07:07 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:17.374 12:07:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:17.374 12:07:07 -- nvmf/common.sh@521 -- # config=() 00:31:17.374 12:07:07 -- nvmf/common.sh@521 -- # local subsystem config 00:31:17.374 12:07:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.374 12:07:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:17.374 12:07:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:17.374 { 00:31:17.374 "params": { 00:31:17.374 "name": "Nvme$subsystem", 00:31:17.374 "trtype": "$TEST_TRANSPORT", 00:31:17.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.374 "adrfam": "ipv4", 00:31:17.374 "trsvcid": "$NVMF_PORT", 00:31:17.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.374 "hdgst": ${hdgst:-false}, 00:31:17.374 "ddgst": ${ddgst:-false} 00:31:17.374 }, 00:31:17.374 "method": "bdev_nvme_attach_controller" 00:31:17.374 } 00:31:17.374 EOF 00:31:17.374 )") 00:31:17.374 12:07:07 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.374 12:07:07 -- target/dif.sh@82 -- # gen_fio_conf 00:31:17.374 12:07:07 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:17.374 12:07:07 -- target/dif.sh@54 -- # local file 00:31:17.374 12:07:07 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.374 12:07:07 -- target/dif.sh@56 -- # cat 00:31:17.374 12:07:07 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:17.374 12:07:07 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.374 12:07:07 -- common/autotest_common.sh@1327 -- # shift 00:31:17.374 12:07:07 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:17.374 12:07:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.374 12:07:07 -- nvmf/common.sh@543 -- # cat 00:31:17.374 12:07:07 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.374 12:07:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:17.374 12:07:07 -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.374 12:07:07 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:17.374 12:07:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:17.374 12:07:07 -- nvmf/common.sh@545 -- # jq . 00:31:17.374 12:07:07 -- nvmf/common.sh@546 -- # IFS=, 00:31:17.374 12:07:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:17.374 "params": { 00:31:17.374 "name": "Nvme0", 00:31:17.374 "trtype": "tcp", 00:31:17.374 "traddr": "10.0.0.2", 00:31:17.374 "adrfam": "ipv4", 00:31:17.374 "trsvcid": "4420", 00:31:17.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.374 "hdgst": false, 00:31:17.374 "ddgst": false 00:31:17.374 }, 00:31:17.374 "method": "bdev_nvme_attach_controller" 00:31:17.374 }' 00:31:17.374 12:07:07 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:17.374 12:07:07 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:17.374 12:07:07 -- common/autotest_common.sh@1333 -- # break 00:31:17.374 12:07:07 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:17.374 12:07:07 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.633 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:17.633 fio-3.35 00:31:17.633 Starting 1 thread 00:31:17.892 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.129 00:31:30.129 filename0: (groupid=0, jobs=1): err= 0: pid=2674278: Thu Apr 18 12:07:19 2024 00:31:30.129 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10004msec) 00:31:30.129 slat (nsec): min=6086, max=36215, avg=8296.51, stdev=2797.70 00:31:30.129 clat (usec): min=41775, max=44613, avg=42006.55, stdev=224.90 00:31:30.129 lat (usec): min=41782, max=44649, avg=42014.85, stdev=225.39 00:31:30.129 clat percentiles (usec): 00:31:30.130 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:30.130 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:30.130 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:30.130 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:31:30.130 | 99.99th=[44827] 00:31:30.130 bw ( KiB/s): min= 352, max= 384, per=99.83%, avg=380.63, stdev=10.09, samples=19 00:31:30.130 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:30.130 lat (msec) : 50=100.00% 00:31:30.130 cpu : usr=88.08%, sys=11.61%, ctx=12, majf=0, minf=1634 00:31:30.130 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.130 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.130 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:30.130 00:31:30.130 Run status group 0 (all jobs): 00:31:30.130 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10004-10004msec 00:31:30.130 ----------------------------------------------------- 00:31:30.130 Suppressions used: 00:31:30.130 count bytes template 
00:31:30.130 1 8 /usr/src/fio/parse.c 00:31:30.130 1 8 libtcmalloc_minimal.so 00:31:30.130 1 904 libcrypto.so 00:31:30.130 ----------------------------------------------------- 00:31:30.130 00:31:30.130 12:07:20 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:30.130 12:07:20 -- target/dif.sh@43 -- # local sub 00:31:30.130 12:07:20 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.130 12:07:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.130 12:07:20 -- target/dif.sh@36 -- # local sub_id=0 00:31:30.130 12:07:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 00:31:30.130 real 0m12.452s 00:31:30.130 user 0m19.355s 00:31:30.130 sys 0m1.824s 00:31:30.130 12:07:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 ************************************ 00:31:30.130 END TEST fio_dif_1_default 00:31:30.130 ************************************ 00:31:30.130 12:07:20 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:30.130 12:07:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:30.130 12:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 ************************************ 00:31:30.130 START TEST fio_dif_1_multi_subsystems 00:31:30.130 ************************************ 00:31:30.130 12:07:20 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:31:30.130 12:07:20 -- target/dif.sh@92 -- # local files=1 00:31:30.130 12:07:20 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:30.130 12:07:20 -- target/dif.sh@28 -- # local sub 00:31:30.130 12:07:20 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.130 12:07:20 -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.130 12:07:20 -- target/dif.sh@18 -- # local sub_id=0 00:31:30.130 12:07:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 bdev_null0 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 [2024-04-18 12:07:20.348568] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.130 12:07:20 -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.130 12:07:20 -- target/dif.sh@18 -- # local sub_id=1 00:31:30.130 12:07:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 bdev_null1 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.130 12:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.130 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 12:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.130 12:07:20 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:30.130 12:07:20 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:30.130 12:07:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:30.130 12:07:20 -- nvmf/common.sh@521 -- # config=() 00:31:30.130 12:07:20 -- nvmf/common.sh@521 -- # local subsystem config 00:31:30.130 12:07:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:30.130 12:07:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.130 12:07:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:30.130 { 00:31:30.130 "params": { 00:31:30.130 "name": "Nvme$subsystem", 00:31:30.130 "trtype": "$TEST_TRANSPORT", 00:31:30.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.130 "adrfam": "ipv4", 00:31:30.130 "trsvcid": "$NVMF_PORT", 00:31:30.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.130 "hdgst": ${hdgst:-false}, 00:31:30.130 "ddgst": ${ddgst:-false} 00:31:30.130 }, 00:31:30.130 "method": "bdev_nvme_attach_controller" 00:31:30.130 } 00:31:30.130 EOF 00:31:30.130 )") 00:31:30.130 12:07:20 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.130 12:07:20 -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.130 12:07:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:30.130 12:07:20 -- 
target/dif.sh@54 -- # local file 00:31:30.130 12:07:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.130 12:07:20 -- target/dif.sh@56 -- # cat 00:31:30.130 12:07:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:30.130 12:07:20 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.130 12:07:20 -- common/autotest_common.sh@1327 -- # shift 00:31:30.130 12:07:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:30.130 12:07:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.130 12:07:20 -- nvmf/common.sh@543 -- # cat 00:31:30.130 12:07:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.130 12:07:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.130 12:07:20 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.130 12:07:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:30.130 12:07:20 -- target/dif.sh@73 -- # cat 00:31:30.130 12:07:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:30.130 12:07:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:30.130 12:07:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:30.130 { 00:31:30.130 "params": { 00:31:30.130 "name": "Nvme$subsystem", 00:31:30.130 "trtype": "$TEST_TRANSPORT", 00:31:30.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.130 "adrfam": "ipv4", 00:31:30.130 "trsvcid": "$NVMF_PORT", 00:31:30.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.130 "hdgst": ${hdgst:-false}, 00:31:30.130 "ddgst": ${ddgst:-false} 00:31:30.130 }, 00:31:30.130 "method": "bdev_nvme_attach_controller" 00:31:30.130 } 00:31:30.130 EOF 00:31:30.130 )") 00:31:30.130 12:07:20 -- nvmf/common.sh@543 -- # cat 00:31:30.130 12:07:20 -- target/dif.sh@72 -- # (( file++ )) 00:31:30.130 12:07:20 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.130 12:07:20 -- nvmf/common.sh@545 -- # jq . 
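The rpc_cmd calls traced above are effectively thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. A hand-runnable equivalent of the two-subsystem setup this test just performed is sketched below; the method names and arguments are copied from the trace, and the explicit -s socket path is the default, spelled out only for clarity.

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # The tcp transport was created once at test start:
  #   nvmf_create_transport -t tcp -o --dif-insert-or-strip

  # 64 MiB null bdevs, 512-byte blocks plus 16 bytes of metadata, DIF type 1
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $RPC bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1

  # One subsystem per bdev, each listening on the target-side address
  for i in 0 1; do
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
         --serial-number 53313233-$i --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
         -t tcp -a 10.0.0.2 -s 4420
  done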
00:31:30.130 12:07:20 -- nvmf/common.sh@546 -- # IFS=, 00:31:30.130 12:07:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:30.130 "params": { 00:31:30.130 "name": "Nvme0", 00:31:30.130 "trtype": "tcp", 00:31:30.130 "traddr": "10.0.0.2", 00:31:30.130 "adrfam": "ipv4", 00:31:30.130 "trsvcid": "4420", 00:31:30.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.130 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.130 "hdgst": false, 00:31:30.130 "ddgst": false 00:31:30.130 }, 00:31:30.130 "method": "bdev_nvme_attach_controller" 00:31:30.130 },{ 00:31:30.130 "params": { 00:31:30.131 "name": "Nvme1", 00:31:30.131 "trtype": "tcp", 00:31:30.131 "traddr": "10.0.0.2", 00:31:30.131 "adrfam": "ipv4", 00:31:30.131 "trsvcid": "4420", 00:31:30.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.131 "hdgst": false, 00:31:30.131 "ddgst": false 00:31:30.131 }, 00:31:30.131 "method": "bdev_nvme_attach_controller" 00:31:30.131 }' 00:31:30.131 12:07:20 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:30.131 12:07:20 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:30.131 12:07:20 -- common/autotest_common.sh@1333 -- # break 00:31:30.131 12:07:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.131 12:07:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.404 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.404 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.404 fio-3.35 00:31:30.404 Starting 2 threads 00:31:30.404 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.612 00:31:42.612 filename0: (groupid=0, jobs=1): err= 0: pid=2676495: Thu Apr 18 12:07:31 2024 00:31:42.612 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:31:42.612 slat (nsec): min=6467, max=28002, avg=8595.08, stdev=2719.10 00:31:42.612 clat (usec): min=41779, max=43068, avg=41993.31, stdev=146.09 00:31:42.612 lat (usec): min=41786, max=43091, avg=42001.90, stdev=146.35 00:31:42.612 clat percentiles (usec): 00:31:42.612 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:31:42.612 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:42.612 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.612 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:42.612 | 99.99th=[43254] 00:31:42.612 bw ( KiB/s): min= 352, max= 384, per=33.90%, avg=380.63, stdev=10.09, samples=19 00:31:42.612 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:42.612 lat (msec) : 50=100.00% 00:31:42.612 cpu : usr=93.83%, sys=5.89%, ctx=13, majf=0, minf=1634 00:31:42.612 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.612 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.612 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.612 filename1: (groupid=0, jobs=1): err= 0: pid=2676496: Thu Apr 18 12:07:31 2024 00:31:42.612 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 
00:31:42.612 slat (nsec): min=6504, max=25966, avg=8034.47, stdev=2053.86 00:31:42.612 clat (usec): min=709, max=43006, avg=21569.87, stdev=20392.84 00:31:42.612 lat (usec): min=716, max=43024, avg=21577.91, stdev=20392.48 00:31:42.612 clat percentiles (usec): 00:31:42.612 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1090], 00:31:42.612 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[41157], 60.00th=[41681], 00:31:42.612 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:31:42.612 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:42.612 | 99.99th=[43254] 00:31:42.612 bw ( KiB/s): min= 672, max= 768, per=66.01%, avg=740.80, stdev=33.28, samples=20 00:31:42.612 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:31:42.612 lat (usec) : 750=0.22% 00:31:42.612 lat (msec) : 2=49.57%, 50=50.22% 00:31:42.612 cpu : usr=93.65%, sys=6.06%, ctx=25, majf=0, minf=1636 00:31:42.612 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.612 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.612 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.612 00:31:42.612 Run status group 0 (all jobs): 00:31:42.612 READ: bw=1121KiB/s (1148kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10001-10020msec 00:31:42.612 ----------------------------------------------------- 00:31:42.612 Suppressions used: 00:31:42.612 count bytes template 00:31:42.612 2 16 /usr/src/fio/parse.c 00:31:42.612 1 8 libtcmalloc_minimal.so 00:31:42.612 1 904 libcrypto.so 00:31:42.612 ----------------------------------------------------- 00:31:42.612 00:31:42.612 12:07:32 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:42.612 12:07:32 -- target/dif.sh@43 -- # local sub 00:31:42.612 12:07:32 -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.612 12:07:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.612 12:07:32 -- target/dif.sh@36 -- # local sub_id=0 00:31:42.612 12:07:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.612 12:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.612 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 12:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.612 12:07:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.612 12:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.612 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 12:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.612 12:07:32 -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.612 12:07:32 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:42.612 12:07:32 -- target/dif.sh@36 -- # local sub_id=1 00:31:42.612 12:07:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.612 12:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.612 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 12:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.612 12:07:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:42.612 12:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.612 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 12:07:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.612 00:31:42.612 real 0m12.623s 00:31:42.612 user 0m29.258s 00:31:42.612 sys 0m1.858s 00:31:42.612 12:07:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:42.612 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 ************************************ 00:31:42.612 END TEST fio_dif_1_multi_subsystems 00:31:42.612 ************************************ 00:31:42.612 12:07:32 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:42.612 12:07:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:42.612 12:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:42.612 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 ************************************ 00:31:42.612 START TEST fio_dif_rand_params 00:31:42.612 ************************************ 00:31:42.612 12:07:33 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:31:42.612 12:07:33 -- target/dif.sh@100 -- # local NULL_DIF 00:31:42.612 12:07:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:42.612 12:07:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:42.612 12:07:33 -- target/dif.sh@103 -- # bs=128k 00:31:42.612 12:07:33 -- target/dif.sh@103 -- # numjobs=3 00:31:42.612 12:07:33 -- target/dif.sh@103 -- # iodepth=3 00:31:42.612 12:07:33 -- target/dif.sh@103 -- # runtime=5 00:31:42.612 12:07:33 -- target/dif.sh@105 -- # create_subsystems 0 00:31:42.612 12:07:33 -- target/dif.sh@28 -- # local sub 00:31:42.612 12:07:33 -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.612 12:07:33 -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.612 12:07:33 -- target/dif.sh@18 -- # local sub_id=0 00:31:42.612 12:07:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:42.612 12:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.612 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:31:42.612 bdev_null0 00:31:42.612 12:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.612 12:07:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.612 12:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.613 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:31:42.871 12:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.871 12:07:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.871 12:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.871 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:31:42.871 12:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.871 12:07:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.871 12:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.871 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:31:42.871 [2024-04-18 12:07:33.178720] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.871 12:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.871 12:07:33 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:42.871 12:07:33 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:42.871 12:07:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:42.871 12:07:33 -- nvmf/common.sh@521 -- # config=() 00:31:42.871 12:07:33 -- 
nvmf/common.sh@521 -- # local subsystem config 00:31:42.871 12:07:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.871 12:07:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:42.871 12:07:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:42.871 { 00:31:42.871 "params": { 00:31:42.871 "name": "Nvme$subsystem", 00:31:42.871 "trtype": "$TEST_TRANSPORT", 00:31:42.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.871 "adrfam": "ipv4", 00:31:42.871 "trsvcid": "$NVMF_PORT", 00:31:42.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.871 "hdgst": ${hdgst:-false}, 00:31:42.871 "ddgst": ${ddgst:-false} 00:31:42.871 }, 00:31:42.871 "method": "bdev_nvme_attach_controller" 00:31:42.871 } 00:31:42.871 EOF 00:31:42.871 )") 00:31:42.871 12:07:33 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.871 12:07:33 -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.871 12:07:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:42.871 12:07:33 -- target/dif.sh@54 -- # local file 00:31:42.871 12:07:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.871 12:07:33 -- target/dif.sh@56 -- # cat 00:31:42.871 12:07:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:42.871 12:07:33 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.871 12:07:33 -- common/autotest_common.sh@1327 -- # shift 00:31:42.871 12:07:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:42.871 12:07:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.871 12:07:33 -- nvmf/common.sh@543 -- # cat 00:31:42.871 12:07:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.871 12:07:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.871 12:07:33 -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.871 12:07:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:42.871 12:07:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:42.871 12:07:33 -- nvmf/common.sh@545 -- # jq . 
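The fio side of these tests uses SPDK's external bdev ioengine rather than a kernel block device: build/fio/spdk_bdev is preloaded into a stock fio binary, and --spdk_json_conf hands it a config built around the bdev_nvme_attach_controller parameters printed by gen_nvmf_target_json, so the target's namespaces show up as bdevs addressable by name. Because this build has ASan enabled, libasan has to be preloaded ahead of the plugin, which is what the ldd | grep libasan step in the trace is resolving. Stripped of the /dev/fd plumbing, the invocation looks roughly like the sketch below; bdev.json and job.fio are stand-ins for the file descriptors the script actually passes.

  PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

  # bdev.json: JSON config attaching the NVMe-oF controller(s), as printed
  #            by gen_nvmf_target_json in the trace
  # job.fio:   job file built by gen_fio_conf (one section per attached bdev)
  LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio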
00:31:42.871 12:07:33 -- nvmf/common.sh@546 -- # IFS=, 00:31:42.871 12:07:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:42.871 "params": { 00:31:42.872 "name": "Nvme0", 00:31:42.872 "trtype": "tcp", 00:31:42.872 "traddr": "10.0.0.2", 00:31:42.872 "adrfam": "ipv4", 00:31:42.872 "trsvcid": "4420", 00:31:42.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.872 "hdgst": false, 00:31:42.872 "ddgst": false 00:31:42.872 }, 00:31:42.872 "method": "bdev_nvme_attach_controller" 00:31:42.872 }' 00:31:42.872 12:07:33 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:42.872 12:07:33 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:42.872 12:07:33 -- common/autotest_common.sh@1333 -- # break 00:31:42.872 12:07:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.872 12:07:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.130 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:43.130 ... 00:31:43.130 fio-3.35 00:31:43.130 Starting 3 threads 00:31:43.130 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.693 00:31:49.693 filename0: (groupid=0, jobs=1): err= 0: pid=2678645: Thu Apr 18 12:07:39 2024 00:31:49.693 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(139MiB/5025msec) 00:31:49.693 slat (nsec): min=4177, max=21113, avg=11133.99, stdev=2388.61 00:31:49.693 clat (usec): min=4660, max=92944, avg=13502.51, stdev=13800.27 00:31:49.693 lat (usec): min=4669, max=92951, avg=13513.64, stdev=13800.30 00:31:49.693 clat percentiles (usec): 00:31:49.693 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7439], 00:31:49.693 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9765], 00:31:49.693 | 70.00th=[10421], 80.00th=[11207], 90.00th=[49021], 95.00th=[51119], 00:31:49.693 | 99.00th=[53740], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:31:49.693 | 99.99th=[92799] 00:31:49.693 bw ( KiB/s): min=11776, max=38144, per=32.23%, avg=28467.20, stdev=8328.46, samples=10 00:31:49.693 iops : min= 92, max= 298, avg=222.40, stdev=65.07, samples=10 00:31:49.693 lat (msec) : 10=63.59%, 20=25.74%, 50=3.32%, 100=7.35% 00:31:49.693 cpu : usr=92.38%, sys=7.13%, ctx=9, majf=0, minf=1637 00:31:49.693 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.693 issued rwts: total=1115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.693 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:49.693 filename0: (groupid=0, jobs=1): err= 0: pid=2678646: Thu Apr 18 12:07:39 2024 00:31:49.693 read: IOPS=245, BW=30.6MiB/s (32.1MB/s)(155MiB/5046msec) 00:31:49.693 slat (nsec): min=6757, max=31037, avg=10774.55, stdev=2526.77 00:31:49.693 clat (usec): min=4365, max=92882, avg=12185.17, stdev=12086.82 00:31:49.693 lat (usec): min=4376, max=92895, avg=12195.94, stdev=12086.92 00:31:49.693 clat percentiles (usec): 00:31:49.693 | 1.00th=[ 4883], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6783], 00:31:49.693 | 30.00th=[ 7570], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:31:49.693 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12780], 
95.00th=[50070], 00:31:49.693 | 99.00th=[52691], 99.50th=[54264], 99.90th=[90702], 99.95th=[92799], 00:31:49.693 | 99.99th=[92799] 00:31:49.693 bw ( KiB/s): min=22528, max=40448, per=35.76%, avg=31590.40, stdev=5214.70, samples=10 00:31:49.693 iops : min= 176, max= 316, avg=246.80, stdev=40.74, samples=10 00:31:49.693 lat (msec) : 10=65.56%, 20=26.19%, 50=2.99%, 100=5.25% 00:31:49.693 cpu : usr=91.52%, sys=8.03%, ctx=12, majf=0, minf=1632 00:31:49.693 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.693 issued rwts: total=1237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.693 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:49.693 filename0: (groupid=0, jobs=1): err= 0: pid=2678647: Thu Apr 18 12:07:39 2024 00:31:49.693 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(141MiB/5044msec) 00:31:49.693 slat (nsec): min=6771, max=29719, avg=11047.38, stdev=2527.16 00:31:49.693 clat (usec): min=4435, max=93985, avg=13336.42, stdev=13170.32 00:31:49.693 lat (usec): min=4445, max=93997, avg=13347.47, stdev=13170.37 00:31:49.693 clat percentiles (usec): 00:31:49.693 | 1.00th=[ 4948], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 7570], 00:31:49.693 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:31:49.693 | 70.00th=[10814], 80.00th=[11600], 90.00th=[46924], 95.00th=[51643], 00:31:49.693 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56886], 99.95th=[93848], 00:31:49.693 | 99.99th=[93848] 00:31:49.693 bw ( KiB/s): min=17664, max=38656, per=32.69%, avg=28876.80, stdev=6782.58, samples=10 00:31:49.693 iops : min= 138, max= 302, avg=225.60, stdev=52.99, samples=10 00:31:49.693 lat (msec) : 10=61.95%, 20=27.88%, 50=2.39%, 100=7.79% 00:31:49.693 cpu : usr=91.67%, sys=7.89%, ctx=8, majf=0, minf=1635 00:31:49.693 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.693 issued rwts: total=1130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.693 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:49.693 00:31:49.693 Run status group 0 (all jobs): 00:31:49.694 READ: bw=86.3MiB/s (90.4MB/s), 27.7MiB/s-30.6MiB/s (29.1MB/s-32.1MB/s), io=435MiB (456MB), run=5025-5046msec 00:31:50.261 ----------------------------------------------------- 00:31:50.261 Suppressions used: 00:31:50.261 count bytes template 00:31:50.261 5 44 /usr/src/fio/parse.c 00:31:50.261 1 8 libtcmalloc_minimal.so 00:31:50.261 1 904 libcrypto.so 00:31:50.261 ----------------------------------------------------- 00:31:50.261 00:31:50.261 12:07:40 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:50.261 12:07:40 -- target/dif.sh@43 -- # local sub 00:31:50.261 12:07:40 -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.261 12:07:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:50.261 12:07:40 -- target/dif.sh@36 -- # local sub_id=0 00:31:50.261 12:07:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:50.261 12:07:40 -- target/dif.sh@109 -- # bs=4k 00:31:50.261 12:07:40 -- target/dif.sh@109 -- # numjobs=8 00:31:50.261 12:07:40 -- target/dif.sh@109 -- # iodepth=16 00:31:50.261 12:07:40 -- target/dif.sh@109 -- # runtime= 00:31:50.261 12:07:40 -- target/dif.sh@109 -- # files=2 00:31:50.261 12:07:40 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:50.261 12:07:40 -- target/dif.sh@28 -- # local sub 00:31:50.261 12:07:40 -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.261 12:07:40 -- target/dif.sh@31 -- # create_subsystem 0 00:31:50.261 12:07:40 -- target/dif.sh@18 -- # local sub_id=0 00:31:50.261 12:07:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 bdev_null0 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 [2024-04-18 12:07:40.734660] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.261 12:07:40 -- target/dif.sh@31 -- # create_subsystem 1 00:31:50.261 12:07:40 -- target/dif.sh@18 -- # local sub_id=1 00:31:50.261 12:07:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 bdev_null1 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.261 12:07:40 -- target/dif.sh@31 -- # create_subsystem 2 00:31:50.261 12:07:40 -- target/dif.sh@18 -- # local sub_id=2 00:31:50.261 12:07:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 bdev_null2 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:50.261 12:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.261 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:31:50.261 12:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.261 12:07:40 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:50.261 12:07:40 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:50.261 12:07:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:50.261 12:07:40 -- nvmf/common.sh@521 -- # config=() 00:31:50.261 12:07:40 -- nvmf/common.sh@521 -- # local subsystem config 00:31:50.261 12:07:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.261 12:07:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:50.261 12:07:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:50.261 { 00:31:50.261 "params": { 00:31:50.261 "name": "Nvme$subsystem", 00:31:50.261 "trtype": "$TEST_TRANSPORT", 00:31:50.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.261 "adrfam": "ipv4", 00:31:50.261 "trsvcid": "$NVMF_PORT", 00:31:50.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.261 "hdgst": ${hdgst:-false}, 00:31:50.261 "ddgst": ${ddgst:-false} 00:31:50.261 }, 00:31:50.261 "method": "bdev_nvme_attach_controller" 00:31:50.261 } 00:31:50.261 EOF 00:31:50.261 )") 00:31:50.261 12:07:40 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.261 12:07:40 -- target/dif.sh@82 -- # gen_fio_conf 00:31:50.261 12:07:40 -- common/autotest_common.sh@1323 -- # local 
fio_dir=/usr/src/fio 00:31:50.261 12:07:40 -- target/dif.sh@54 -- # local file 00:31:50.261 12:07:40 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.261 12:07:40 -- target/dif.sh@56 -- # cat 00:31:50.261 12:07:40 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:50.261 12:07:40 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.261 12:07:40 -- common/autotest_common.sh@1327 -- # shift 00:31:50.261 12:07:40 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:50.261 12:07:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.520 12:07:40 -- nvmf/common.sh@543 -- # cat 00:31:50.520 12:07:40 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.520 12:07:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:50.520 12:07:40 -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.520 12:07:40 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:50.520 12:07:40 -- target/dif.sh@73 -- # cat 00:31:50.520 12:07:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:50.520 12:07:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:50.520 12:07:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:50.520 { 00:31:50.520 "params": { 00:31:50.520 "name": "Nvme$subsystem", 00:31:50.520 "trtype": "$TEST_TRANSPORT", 00:31:50.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.520 "adrfam": "ipv4", 00:31:50.520 "trsvcid": "$NVMF_PORT", 00:31:50.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.520 "hdgst": ${hdgst:-false}, 00:31:50.520 "ddgst": ${ddgst:-false} 00:31:50.520 }, 00:31:50.520 "method": "bdev_nvme_attach_controller" 00:31:50.520 } 00:31:50.520 EOF 00:31:50.520 )") 00:31:50.520 12:07:40 -- target/dif.sh@72 -- # (( file++ )) 00:31:50.521 12:07:40 -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.521 12:07:40 -- nvmf/common.sh@543 -- # cat 00:31:50.521 12:07:40 -- target/dif.sh@73 -- # cat 00:31:50.521 12:07:40 -- target/dif.sh@72 -- # (( file++ )) 00:31:50.521 12:07:40 -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.521 12:07:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:50.521 12:07:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:50.521 { 00:31:50.521 "params": { 00:31:50.521 "name": "Nvme$subsystem", 00:31:50.521 "trtype": "$TEST_TRANSPORT", 00:31:50.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.521 "adrfam": "ipv4", 00:31:50.521 "trsvcid": "$NVMF_PORT", 00:31:50.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.521 "hdgst": ${hdgst:-false}, 00:31:50.521 "ddgst": ${ddgst:-false} 00:31:50.521 }, 00:31:50.521 "method": "bdev_nvme_attach_controller" 00:31:50.521 } 00:31:50.521 EOF 00:31:50.521 )") 00:31:50.521 12:07:40 -- nvmf/common.sh@543 -- # cat 00:31:50.521 12:07:40 -- nvmf/common.sh@545 -- # jq . 
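For the 24-thread run that follows, gen_fio_conf expands to one job section per attached bdev with the parameters set at the top of this test (4k blocks, numjobs=8, iodepth=16, random read). The real file is built on the fly and passed as /dev/fd/61, so the following is only a sketch of its shape: the Nvme0n1 .. Nvme2n1 filenames are an assumption based on the controller names generated above (SPDK's bdev_nvme layer exposes controller Nvme0's first namespace as bdev Nvme0n1), and the [global] housekeeping options are illustrative rather than copied from the harness.

  [global]
  ioengine=spdk_bdev      ; external SPDK plugin, see the LD_PRELOAD note earlier
  thread=1
  rw=randread
  bs=4k
  numjobs=8
  iodepth=16

  [filename0]
  filename=Nvme0n1        ; assumed bdev name for controller Nvme0

  [filename1]
  filename=Nvme1n1        ; assumed bdev name for controller Nvme1

  [filename2]
  filename=Nvme2n1        ; assumed bdev name for controller Nvme2

With three job sections and numjobs=8 this accounts for the 24 threads fio reports starting below, and for the filename0 .. filename2 job names in the per-job results.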
00:31:50.521 12:07:40 -- nvmf/common.sh@546 -- # IFS=, 00:31:50.521 12:07:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:50.521 "params": { 00:31:50.521 "name": "Nvme0", 00:31:50.521 "trtype": "tcp", 00:31:50.521 "traddr": "10.0.0.2", 00:31:50.521 "adrfam": "ipv4", 00:31:50.521 "trsvcid": "4420", 00:31:50.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.521 "hdgst": false, 00:31:50.521 "ddgst": false 00:31:50.521 }, 00:31:50.521 "method": "bdev_nvme_attach_controller" 00:31:50.521 },{ 00:31:50.521 "params": { 00:31:50.521 "name": "Nvme1", 00:31:50.521 "trtype": "tcp", 00:31:50.521 "traddr": "10.0.0.2", 00:31:50.521 "adrfam": "ipv4", 00:31:50.521 "trsvcid": "4420", 00:31:50.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:50.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:50.521 "hdgst": false, 00:31:50.521 "ddgst": false 00:31:50.521 }, 00:31:50.521 "method": "bdev_nvme_attach_controller" 00:31:50.521 },{ 00:31:50.521 "params": { 00:31:50.521 "name": "Nvme2", 00:31:50.521 "trtype": "tcp", 00:31:50.521 "traddr": "10.0.0.2", 00:31:50.521 "adrfam": "ipv4", 00:31:50.521 "trsvcid": "4420", 00:31:50.521 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:50.521 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:50.521 "hdgst": false, 00:31:50.521 "ddgst": false 00:31:50.521 }, 00:31:50.521 "method": "bdev_nvme_attach_controller" 00:31:50.521 }' 00:31:50.521 12:07:40 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:50.521 12:07:40 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:50.521 12:07:40 -- common/autotest_common.sh@1333 -- # break 00:31:50.521 12:07:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:50.521 12:07:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.780 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:50.780 ... 00:31:50.780 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:50.780 ... 00:31:50.780 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:50.780 ... 
00:31:50.780 fio-3.35 00:31:50.780 Starting 24 threads 00:31:50.780 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.984 00:32:02.984 filename0: (groupid=0, jobs=1): err= 0: pid=2680079: Thu Apr 18 12:07:52 2024 00:32:02.984 read: IOPS=548, BW=2195KiB/s (2247kB/s)(21.4MiB/10003msec) 00:32:02.984 slat (nsec): min=4873, max=92237, avg=33375.99, stdev=20379.56 00:32:02.984 clat (usec): min=8327, max=47932, avg=28904.28, stdev=2915.19 00:32:02.984 lat (usec): min=8335, max=48008, avg=28937.66, stdev=2917.75 00:32:02.984 clat percentiles (usec): 00:32:02.984 | 1.00th=[17433], 5.00th=[26608], 10.00th=[27657], 20.00th=[28181], 00:32:02.984 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.984 | 70.00th=[29492], 80.00th=[30016], 90.00th=[30802], 95.00th=[31589], 00:32:02.984 | 99.00th=[37487], 99.50th=[41157], 99.90th=[45351], 99.95th=[47973], 00:32:02.984 | 99.99th=[47973] 00:32:02.984 bw ( KiB/s): min= 2048, max= 2336, per=4.20%, avg=2202.16, stdev=67.20, samples=19 00:32:02.984 iops : min= 512, max= 584, avg=550.42, stdev=16.86, samples=19 00:32:02.984 lat (msec) : 10=0.26%, 20=1.80%, 50=97.94% 00:32:02.984 cpu : usr=94.55%, sys=3.18%, ctx=53, majf=0, minf=1632 00:32:02.984 IO depths : 1=4.3%, 2=8.7%, 4=22.1%, 8=56.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:32:02.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.984 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.984 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.984 filename0: (groupid=0, jobs=1): err= 0: pid=2680080: Thu Apr 18 12:07:52 2024 00:32:02.984 read: IOPS=550, BW=2200KiB/s (2253kB/s)(21.5MiB/10005msec) 00:32:02.984 slat (nsec): min=5351, max=85241, avg=25384.66, stdev=17293.22 00:32:02.984 clat (usec): min=4760, max=42218, avg=28881.25, stdev=1789.56 00:32:02.984 lat (usec): min=4771, max=42241, avg=28906.64, stdev=1787.45 00:32:02.984 clat percentiles (usec): 00:32:02.984 | 1.00th=[22938], 5.00th=[27395], 10.00th=[27657], 20.00th=[28181], 00:32:02.984 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28967], 00:32:02.984 | 70.00th=[29492], 80.00th=[30016], 90.00th=[30802], 95.00th=[31065], 00:32:02.984 | 99.00th=[31589], 99.50th=[31589], 99.90th=[39584], 99.95th=[40633], 00:32:02.984 | 99.99th=[42206] 00:32:02.984 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2202.42, stdev=68.76, samples=19 00:32:02.984 iops : min= 512, max= 576, avg=550.53, stdev=17.23, samples=19 00:32:02.984 lat (msec) : 10=0.13%, 20=0.49%, 50=99.38% 00:32:02.984 cpu : usr=97.55%, sys=1.76%, ctx=172, majf=0, minf=1634 00:32:02.984 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:02.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.984 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.984 issued rwts: total=5504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.984 filename0: (groupid=0, jobs=1): err= 0: pid=2680081: Thu Apr 18 12:07:52 2024 00:32:02.984 read: IOPS=546, BW=2186KiB/s (2238kB/s)(21.4MiB/10015msec) 00:32:02.984 slat (nsec): min=3454, max=82757, avg=35677.28, stdev=16027.98 00:32:02.984 clat (usec): min=19034, max=61878, avg=28948.69, stdev=2155.00 00:32:02.984 lat (usec): min=19047, max=61891, avg=28984.37, stdev=2154.92 00:32:02.984 clat percentiles (usec): 00:32:02.984 | 
1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[28181], 00:32:02.984 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.984 | 70.00th=[29230], 80.00th=[29754], 90.00th=[30540], 95.00th=[30802], 00:32:02.984 | 99.00th=[31851], 99.50th=[32375], 99.90th=[61604], 99.95th=[61604], 00:32:02.984 | 99.99th=[62129] 00:32:02.984 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2189.42, stdev=72.62, samples=19 00:32:02.984 iops : min= 512, max= 576, avg=547.32, stdev=18.17, samples=19 00:32:02.984 lat (msec) : 20=0.11%, 50=99.60%, 100=0.29% 00:32:02.984 cpu : usr=97.85%, sys=1.70%, ctx=47, majf=0, minf=1632 00:32:02.984 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.984 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.984 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.984 filename0: (groupid=0, jobs=1): err= 0: pid=2680083: Thu Apr 18 12:07:52 2024 00:32:02.984 read: IOPS=546, BW=2184KiB/s (2237kB/s)(21.4MiB/10020msec) 00:32:02.984 slat (nsec): min=3463, max=74111, avg=35520.65, stdev=14393.34 00:32:02.984 clat (usec): min=17786, max=75677, avg=28990.96, stdev=2433.00 00:32:02.984 lat (usec): min=17797, max=75693, avg=29026.48, stdev=2432.59 00:32:02.984 clat percentiles (usec): 00:32:02.985 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[28181], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29230], 80.00th=[29754], 90.00th=[30540], 95.00th=[31065], 00:32:02.985 | 99.00th=[31851], 99.50th=[32637], 99.90th=[66847], 99.95th=[66847], 00:32:02.985 | 99.99th=[76022] 00:32:02.985 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2188.16, stdev=93.87, samples=19 00:32:02.985 iops : min= 512, max= 576, avg=546.84, stdev=23.39, samples=19 00:32:02.985 lat (msec) : 20=0.15%, 50=99.56%, 100=0.29% 00:32:02.985 cpu : usr=97.50%, sys=1.83%, ctx=23, majf=0, minf=1634 00:32:02.985 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename0: (groupid=0, jobs=1): err= 0: pid=2680084: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=542, BW=2169KiB/s (2222kB/s)(21.2MiB/10008msec) 00:32:02.985 slat (nsec): min=3807, max=92270, avg=37982.57, stdev=20666.94 00:32:02.985 clat (usec): min=11999, max=48097, avg=29206.31, stdev=3435.18 00:32:02.985 lat (usec): min=12014, max=48148, avg=29244.29, stdev=3435.80 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[19268], 5.00th=[26346], 10.00th=[27395], 20.00th=[27919], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29492], 80.00th=[30278], 90.00th=[31327], 95.00th=[35914], 00:32:02.985 | 99.00th=[42730], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:32:02.985 | 99.99th=[47973] 00:32:02.985 bw ( KiB/s): min= 2032, max= 2304, per=4.13%, avg=2168.32, stdev=77.79, samples=19 00:32:02.985 iops : min= 508, max= 576, avg=542.00, stdev=19.51, samples=19 00:32:02.985 lat (msec) : 20=1.40%, 50=98.60% 
00:32:02.985 cpu : usr=93.69%, sys=3.88%, ctx=298, majf=0, minf=1632 00:32:02.985 IO depths : 1=3.1%, 2=6.7%, 4=16.6%, 8=62.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=92.4%, 8=3.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename0: (groupid=0, jobs=1): err= 0: pid=2680085: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=558, BW=2234KiB/s (2288kB/s)(21.8MiB/10005msec) 00:32:02.985 slat (nsec): min=3616, max=93378, avg=32795.64, stdev=20991.12 00:32:02.985 clat (usec): min=10554, max=64389, avg=28326.91, stdev=3562.24 00:32:02.985 lat (usec): min=10568, max=64403, avg=28359.70, stdev=3565.31 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[17433], 5.00th=[20579], 10.00th=[26870], 20.00th=[27657], 00:32:02.985 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705], 00:32:02.985 | 70.00th=[28967], 80.00th=[29754], 90.00th=[30540], 95.00th=[31065], 00:32:02.985 | 99.00th=[37487], 99.50th=[42206], 99.90th=[64226], 99.95th=[64226], 00:32:02.985 | 99.99th=[64226] 00:32:02.985 bw ( KiB/s): min= 2048, max= 2544, per=4.25%, avg=2230.79, stdev=128.28, samples=19 00:32:02.985 iops : min= 512, max= 636, avg=557.58, stdev=32.11, samples=19 00:32:02.985 lat (msec) : 20=3.94%, 50=95.78%, 100=0.29% 00:32:02.985 cpu : usr=97.73%, sys=1.77%, ctx=53, majf=0, minf=1636 00:32:02.985 IO depths : 1=5.2%, 2=10.4%, 4=21.5%, 8=55.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename0: (groupid=0, jobs=1): err= 0: pid=2680086: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=548, BW=2192KiB/s (2245kB/s)(21.4MiB/10013msec) 00:32:02.985 slat (nsec): min=7635, max=92139, avg=38695.39, stdev=19772.62 00:32:02.985 clat (usec): min=14729, max=45982, avg=28846.25, stdev=1730.60 00:32:02.985 lat (usec): min=14778, max=46016, avg=28884.94, stdev=1733.50 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[27919], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29230], 80.00th=[29754], 90.00th=[30540], 95.00th=[30802], 00:32:02.985 | 99.00th=[31851], 99.50th=[32637], 99.90th=[45876], 99.95th=[45876], 00:32:02.985 | 99.99th=[45876] 00:32:02.985 bw ( KiB/s): min= 2048, max= 2432, per=4.17%, avg=2188.95, stdev=84.31, samples=19 00:32:02.985 iops : min= 512, max= 608, avg=547.16, stdev=21.10, samples=19 00:32:02.985 lat (msec) : 20=0.36%, 50=99.64% 00:32:02.985 cpu : usr=94.58%, sys=3.15%, ctx=35, majf=0, minf=1634 00:32:02.985 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename0: (groupid=0, jobs=1): err= 0: pid=2680087: Thu Apr 18 12:07:52 2024 00:32:02.985 read: 
IOPS=547, BW=2190KiB/s (2242kB/s)(21.4MiB/10025msec) 00:32:02.985 slat (nsec): min=3641, max=80920, avg=33538.77, stdev=17710.87 00:32:02.985 clat (usec): min=18460, max=39120, avg=28939.51, stdev=1321.68 00:32:02.985 lat (usec): min=18470, max=39163, avg=28973.04, stdev=1319.09 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[28181], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29230], 80.00th=[30016], 90.00th=[30540], 95.00th=[31065], 00:32:02.985 | 99.00th=[31589], 99.50th=[31589], 99.90th=[38536], 99.95th=[38536], 00:32:02.985 | 99.99th=[39060] 00:32:02.985 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2195.11, stdev=63.49, samples=19 00:32:02.985 iops : min= 512, max= 576, avg=548.58, stdev=15.79, samples=19 00:32:02.985 lat (msec) : 20=0.11%, 50=99.89% 00:32:02.985 cpu : usr=97.43%, sys=2.04%, ctx=49, majf=0, minf=1632 00:32:02.985 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename1: (groupid=0, jobs=1): err= 0: pid=2680088: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=548, BW=2196KiB/s (2249kB/s)(21.5MiB/10011msec) 00:32:02.985 slat (nsec): min=6857, max=83542, avg=32250.78, stdev=16899.70 00:32:02.985 clat (usec): min=12207, max=45901, avg=28867.60, stdev=2495.54 00:32:02.985 lat (usec): min=12260, max=45962, avg=28899.85, stdev=2496.69 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[20055], 5.00th=[26870], 10.00th=[27395], 20.00th=[27919], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29230], 80.00th=[30016], 90.00th=[30540], 95.00th=[31327], 00:32:02.985 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45351], 99.95th=[45876], 00:32:02.985 | 99.99th=[45876] 00:32:02.985 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2194.32, stdev=81.06, samples=19 00:32:02.985 iops : min= 512, max= 576, avg=548.42, stdev=20.28, samples=19 00:32:02.985 lat (msec) : 20=0.71%, 50=99.29% 00:32:02.985 cpu : usr=97.89%, sys=1.67%, ctx=21, majf=0, minf=1636 00:32:02.985 IO depths : 1=5.1%, 2=10.2%, 4=21.7%, 8=55.1%, 16=7.9%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename1: (groupid=0, jobs=1): err= 0: pid=2680089: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=550, BW=2201KiB/s (2254kB/s)(21.5MiB/10019msec) 00:32:02.985 slat (nsec): min=5649, max=94790, avg=36804.98, stdev=19315.28 00:32:02.985 clat (usec): min=12219, max=48016, avg=28794.53, stdev=2679.08 00:32:02.985 lat (usec): min=12227, max=48046, avg=28831.34, stdev=2682.38 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[18482], 5.00th=[26870], 10.00th=[27395], 20.00th=[27919], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29230], 80.00th=[30016], 90.00th=[30802], 95.00th=[31327], 00:32:02.985 | 99.00th=[39584], 
99.50th=[40633], 99.90th=[45876], 99.95th=[47973], 00:32:02.985 | 99.99th=[47973] 00:32:02.985 bw ( KiB/s): min= 2035, max= 2416, per=4.19%, avg=2197.30, stdev=87.35, samples=20 00:32:02.985 iops : min= 508, max= 604, avg=549.10, stdev=21.98, samples=20 00:32:02.985 lat (msec) : 20=1.96%, 50=98.04% 00:32:02.985 cpu : usr=98.13%, sys=1.43%, ctx=21, majf=0, minf=1633 00:32:02.985 IO depths : 1=3.2%, 2=7.9%, 4=22.5%, 8=56.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename1: (groupid=0, jobs=1): err= 0: pid=2680090: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=548, BW=2194KiB/s (2247kB/s)(21.4MiB/10006msec) 00:32:02.985 slat (nsec): min=3581, max=86660, avg=37854.87, stdev=17899.79 00:32:02.985 clat (usec): min=10527, max=44898, avg=28807.10, stdev=1745.55 00:32:02.985 lat (usec): min=10548, max=44913, avg=28844.96, stdev=1744.85 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27657], 20.00th=[27919], 00:32:02.985 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29230], 80.00th=[29754], 90.00th=[30540], 95.00th=[30802], 00:32:02.985 | 99.00th=[31327], 99.50th=[31589], 99.90th=[44827], 99.95th=[44827], 00:32:02.985 | 99.99th=[44827] 00:32:02.985 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2189.37, stdev=58.79, samples=19 00:32:02.985 iops : min= 512, max= 576, avg=547.26, stdev=14.73, samples=19 00:32:02.985 lat (msec) : 20=0.31%, 50=99.69% 00:32:02.985 cpu : usr=97.74%, sys=1.73%, ctx=43, majf=0, minf=1635 00:32:02.985 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.985 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.985 filename1: (groupid=0, jobs=1): err= 0: pid=2680091: Thu Apr 18 12:07:52 2024 00:32:02.985 read: IOPS=547, BW=2190KiB/s (2242kB/s)(21.4MiB/10006msec) 00:32:02.985 slat (nsec): min=6551, max=92272, avg=31562.41, stdev=22014.96 00:32:02.985 clat (usec): min=7146, max=46041, avg=28986.44, stdev=2918.26 00:32:02.985 lat (usec): min=7158, max=46073, avg=29018.01, stdev=2923.42 00:32:02.985 clat percentiles (usec): 00:32:02.985 | 1.00th=[18220], 5.00th=[27395], 10.00th=[27657], 20.00th=[28181], 00:32:02.985 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.985 | 70.00th=[29492], 80.00th=[30016], 90.00th=[30802], 95.00th=[31589], 00:32:02.985 | 99.00th=[39060], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:32:02.985 | 99.99th=[45876] 00:32:02.986 bw ( KiB/s): min= 1916, max= 2480, per=4.19%, avg=2197.74, stdev=121.52, samples=19 00:32:02.986 iops : min= 479, max= 620, avg=549.32, stdev=30.41, samples=19 00:32:02.986 lat (msec) : 10=0.33%, 20=1.06%, 50=98.61% 00:32:02.986 cpu : usr=97.76%, sys=1.82%, ctx=19, majf=0, minf=1637 00:32:02.986 IO depths : 1=5.7%, 2=11.4%, 4=23.8%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=93.9%, 
8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename1: (groupid=0, jobs=1): err= 0: pid=2680092: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=548, BW=2192KiB/s (2245kB/s)(21.4MiB/10014msec) 00:32:02.986 slat (nsec): min=3663, max=92014, avg=24065.95, stdev=17309.13 00:32:02.986 clat (usec): min=18673, max=39625, avg=29014.70, stdev=1196.90 00:32:02.986 lat (usec): min=18685, max=39634, avg=29038.77, stdev=1197.79 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27919], 20.00th=[28181], 00:32:02.986 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28967], 60.00th=[28967], 00:32:02.986 | 70.00th=[29492], 80.00th=[30016], 90.00th=[30540], 95.00th=[31065], 00:32:02.986 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33162], 99.95th=[33162], 00:32:02.986 | 99.99th=[39584] 00:32:02.986 bw ( KiB/s): min= 2043, max= 2304, per=4.18%, avg=2194.63, stdev=78.00, samples=19 00:32:02.986 iops : min= 510, max= 576, avg=548.42, stdev=19.65, samples=19 00:32:02.986 lat (msec) : 20=0.04%, 50=99.96% 00:32:02.986 cpu : usr=97.79%, sys=1.71%, ctx=78, majf=0, minf=1636 00:32:02.986 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename1: (groupid=0, jobs=1): err= 0: pid=2680093: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=545, BW=2181KiB/s (2233kB/s)(21.3MiB/10016msec) 00:32:02.986 slat (nsec): min=3903, max=85965, avg=25383.37, stdev=16952.60 00:32:02.986 clat (usec): min=13493, max=71769, avg=29180.77, stdev=3784.29 00:32:02.986 lat (usec): min=13509, max=71786, avg=29206.15, stdev=3784.63 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[18220], 5.00th=[25035], 10.00th=[27132], 20.00th=[27919], 00:32:02.986 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28967], 60.00th=[28967], 00:32:02.986 | 70.00th=[29492], 80.00th=[30278], 90.00th=[31327], 95.00th=[33424], 00:32:02.986 | 99.00th=[41157], 99.50th=[44303], 99.90th=[71828], 99.95th=[71828], 00:32:02.986 | 99.99th=[71828] 00:32:02.986 bw ( KiB/s): min= 1968, max= 2368, per=4.17%, avg=2184.37, stdev=106.73, samples=19 00:32:02.986 iops : min= 492, max= 592, avg=546.05, stdev=26.69, samples=19 00:32:02.986 lat (msec) : 20=1.61%, 50=98.10%, 100=0.29% 00:32:02.986 cpu : usr=97.73%, sys=1.84%, ctx=16, majf=0, minf=1634 00:32:02.986 IO depths : 1=1.9%, 2=4.0%, 4=9.6%, 8=70.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=91.1%, 8=6.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename1: (groupid=0, jobs=1): err= 0: pid=2680094: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=539, BW=2159KiB/s (2210kB/s)(21.1MiB/10003msec) 00:32:02.986 slat (nsec): min=6922, max=91299, avg=29630.03, stdev=19757.59 00:32:02.986 clat (usec): min=12522, max=61394, avg=29506.26, stdev=3795.94 00:32:02.986 lat (usec): min=12550, max=61421, avg=29535.89, stdev=3795.90 
00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[20841], 5.00th=[24511], 10.00th=[27132], 20.00th=[27919], 00:32:02.986 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28967], 60.00th=[29230], 00:32:02.986 | 70.00th=[29754], 80.00th=[30540], 90.00th=[32375], 95.00th=[36439], 00:32:02.986 | 99.00th=[43779], 99.50th=[44827], 99.90th=[61080], 99.95th=[61604], 00:32:02.986 | 99.99th=[61604] 00:32:02.986 bw ( KiB/s): min= 2043, max= 2304, per=4.11%, avg=2157.00, stdev=78.52, samples=19 00:32:02.986 iops : min= 510, max= 576, avg=539.05, stdev=19.80, samples=19 00:32:02.986 lat (msec) : 20=0.59%, 50=99.11%, 100=0.30% 00:32:02.986 cpu : usr=97.27%, sys=2.12%, ctx=75, majf=0, minf=1636 00:32:02.986 IO depths : 1=0.9%, 2=1.9%, 4=6.8%, 8=74.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=90.5%, 8=7.5%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename1: (groupid=0, jobs=1): err= 0: pid=2680095: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=547, BW=2190KiB/s (2243kB/s)(21.4MiB/10005msec) 00:32:02.986 slat (nsec): min=7110, max=95508, avg=27281.07, stdev=20520.32 00:32:02.986 clat (usec): min=4494, max=73039, avg=29110.32, stdev=4341.56 00:32:02.986 lat (usec): min=4504, max=73067, avg=29137.60, stdev=4341.91 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23987], 20.00th=[27395], 00:32:02.986 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[29230], 00:32:02.986 | 70.00th=[30016], 80.00th=[30802], 90.00th=[33817], 95.00th=[36439], 00:32:02.986 | 99.00th=[43254], 99.50th=[44303], 99.90th=[59507], 99.95th=[59507], 00:32:02.986 | 99.99th=[72877] 00:32:02.986 bw ( KiB/s): min= 2059, max= 2538, per=4.16%, avg=2182.26, stdev=111.43, samples=19 00:32:02.986 iops : min= 514, max= 634, avg=545.26, stdev=27.94, samples=19 00:32:02.986 lat (msec) : 10=0.18%, 20=1.52%, 50=98.01%, 100=0.29% 00:32:02.986 cpu : usr=94.07%, sys=3.21%, ctx=50, majf=0, minf=1637 00:32:02.986 IO depths : 1=0.1%, 2=0.2%, 4=3.2%, 8=80.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=89.4%, 8=8.9%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename2: (groupid=0, jobs=1): err= 0: pid=2680096: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=543, BW=2173KiB/s (2225kB/s)(21.2MiB/10004msec) 00:32:02.986 slat (nsec): min=5932, max=94803, avg=30820.76, stdev=20502.52 00:32:02.986 clat (usec): min=14965, max=53365, avg=29268.42, stdev=3336.88 00:32:02.986 lat (usec): min=14974, max=53388, avg=29299.24, stdev=3337.17 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[19530], 5.00th=[25035], 10.00th=[27657], 20.00th=[28181], 00:32:02.986 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28967], 60.00th=[29230], 00:32:02.986 | 70.00th=[29754], 80.00th=[30278], 90.00th=[31065], 95.00th=[34866], 00:32:02.986 | 99.00th=[42206], 99.50th=[46400], 99.90th=[53216], 99.95th=[53216], 00:32:02.986 | 99.99th=[53216] 00:32:02.986 bw ( KiB/s): min= 1888, max= 2352, per=4.14%, avg=2172.95, stdev=103.72, samples=19 00:32:02.986 iops : min= 472, max= 588, avg=543.16, stdev=25.95, 
samples=19 00:32:02.986 lat (msec) : 20=1.42%, 50=98.18%, 100=0.40% 00:32:02.986 cpu : usr=97.73%, sys=1.78%, ctx=79, majf=0, minf=1633 00:32:02.986 IO depths : 1=1.3%, 2=2.7%, 4=9.2%, 8=72.3%, 16=14.5%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=91.1%, 8=6.3%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename2: (groupid=0, jobs=1): err= 0: pid=2680097: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=547, BW=2191KiB/s (2244kB/s)(21.4MiB/10018msec) 00:32:02.986 slat (nsec): min=4741, max=86293, avg=37662.75, stdev=18220.24 00:32:02.986 clat (usec): min=16821, max=40474, avg=28888.92, stdev=1423.19 00:32:02.986 lat (usec): min=16835, max=40510, avg=28926.58, stdev=1421.28 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[27919], 00:32:02.986 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.986 | 70.00th=[29230], 80.00th=[30016], 90.00th=[30540], 95.00th=[31065], 00:32:02.986 | 99.00th=[31589], 99.50th=[32637], 99.90th=[39060], 99.95th=[40109], 00:32:02.986 | 99.99th=[40633] 00:32:02.986 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2195.11, stdev=63.49, samples=19 00:32:02.986 iops : min= 512, max= 576, avg=548.58, stdev=15.79, samples=19 00:32:02.986 lat (msec) : 20=0.27%, 50=99.73% 00:32:02.986 cpu : usr=97.80%, sys=1.66%, ctx=36, majf=0, minf=1635 00:32:02.986 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename2: (groupid=0, jobs=1): err= 0: pid=2680098: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=541, BW=2167KiB/s (2219kB/s)(21.2MiB/10006msec) 00:32:02.986 slat (nsec): min=3605, max=92102, avg=30248.59, stdev=21532.28 00:32:02.986 clat (usec): min=10968, max=63804, avg=29413.72, stdev=3939.52 00:32:02.986 lat (usec): min=10975, max=63818, avg=29443.97, stdev=3938.11 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[20841], 5.00th=[23725], 10.00th=[25035], 20.00th=[27919], 00:32:02.986 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28967], 60.00th=[29230], 00:32:02.986 | 70.00th=[30016], 80.00th=[30802], 90.00th=[33817], 95.00th=[36439], 00:32:02.986 | 99.00th=[41681], 99.50th=[43779], 99.90th=[63701], 99.95th=[63701], 00:32:02.986 | 99.99th=[63701] 00:32:02.986 bw ( KiB/s): min= 1987, max= 2256, per=4.13%, avg=2166.11, stdev=65.84, samples=19 00:32:02.986 iops : min= 496, max= 564, avg=541.37, stdev=16.59, samples=19 00:32:02.986 lat (msec) : 20=0.72%, 50=98.99%, 100=0.30% 00:32:02.986 cpu : usr=97.59%, sys=1.79%, ctx=56, majf=0, minf=1634 00:32:02.986 IO depths : 1=0.1%, 2=0.3%, 4=3.3%, 8=79.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename2: (groupid=0, jobs=1): 
err= 0: pid=2680099: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=552, BW=2211KiB/s (2264kB/s)(21.6MiB/10003msec) 00:32:02.986 slat (nsec): min=4491, max=86015, avg=31805.83, stdev=16704.97 00:32:02.986 clat (usec): min=14183, max=61749, avg=28682.87, stdev=3587.33 00:32:02.986 lat (usec): min=14192, max=61767, avg=28714.68, stdev=3589.24 00:32:02.986 clat percentiles (usec): 00:32:02.986 | 1.00th=[18744], 5.00th=[22676], 10.00th=[26608], 20.00th=[27919], 00:32:02.986 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.986 | 70.00th=[29230], 80.00th=[30278], 90.00th=[30802], 95.00th=[31589], 00:32:02.986 | 99.00th=[41157], 99.50th=[44827], 99.90th=[61604], 99.95th=[61604], 00:32:02.986 | 99.99th=[61604] 00:32:02.986 bw ( KiB/s): min= 2043, max= 2400, per=4.22%, avg=2213.00, stdev=103.52, samples=19 00:32:02.986 iops : min= 510, max= 600, avg=553.21, stdev=25.95, samples=19 00:32:02.986 lat (msec) : 20=2.84%, 50=96.87%, 100=0.29% 00:32:02.986 cpu : usr=97.90%, sys=1.60%, ctx=45, majf=0, minf=1635 00:32:02.986 IO depths : 1=4.4%, 2=9.1%, 4=20.0%, 8=57.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:32:02.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 complete : 0=0.0%, 4=93.0%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.986 issued rwts: total=5528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.986 filename2: (groupid=0, jobs=1): err= 0: pid=2680100: Thu Apr 18 12:07:52 2024 00:32:02.986 read: IOPS=547, BW=2190KiB/s (2243kB/s)(21.4MiB/10017msec) 00:32:02.986 slat (usec): min=3, max=129, avg=34.09, stdev=19.00 00:32:02.986 clat (usec): min=16715, max=73254, avg=28922.75, stdev=2714.24 00:32:02.986 lat (usec): min=16723, max=73270, avg=28956.84, stdev=2715.82 00:32:02.986 clat percentiles (usec): 00:32:02.987 | 1.00th=[21365], 5.00th=[27132], 10.00th=[27657], 20.00th=[28181], 00:32:02.987 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.987 | 70.00th=[29230], 80.00th=[29754], 90.00th=[30540], 95.00th=[31065], 00:32:02.987 | 99.00th=[33817], 99.50th=[42730], 99.90th=[72877], 99.95th=[72877], 00:32:02.987 | 99.99th=[72877] 00:32:02.987 bw ( KiB/s): min= 2043, max= 2304, per=4.17%, avg=2186.35, stdev=91.01, samples=20 00:32:02.987 iops : min= 510, max= 576, avg=546.40, stdev=22.73, samples=20 00:32:02.987 lat (msec) : 20=0.57%, 50=99.14%, 100=0.29% 00:32:02.987 cpu : usr=97.72%, sys=1.80%, ctx=65, majf=0, minf=1634 00:32:02.987 IO depths : 1=5.6%, 2=11.5%, 4=24.0%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:32:02.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 issued rwts: total=5485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.987 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.987 filename2: (groupid=0, jobs=1): err= 0: pid=2680101: Thu Apr 18 12:07:52 2024 00:32:02.987 read: IOPS=548, BW=2193KiB/s (2246kB/s)(21.4MiB/10008msec) 00:32:02.987 slat (nsec): min=5159, max=80846, avg=35988.37, stdev=15440.26 00:32:02.987 clat (usec): min=10554, max=46015, avg=28849.14, stdev=1794.37 00:32:02.987 lat (usec): min=10594, max=46035, avg=28885.13, stdev=1793.87 00:32:02.987 clat percentiles (usec): 00:32:02.987 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[28181], 00:32:02.987 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:32:02.987 | 70.00th=[29230], 
80.00th=[29754], 90.00th=[30540], 95.00th=[30802], 00:32:02.987 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45876], 99.95th=[45876], 00:32:02.987 | 99.99th=[45876] 00:32:02.987 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2189.16, stdev=58.84, samples=19 00:32:02.987 iops : min= 512, max= 576, avg=547.21, stdev=14.74, samples=19 00:32:02.987 lat (msec) : 20=0.38%, 50=99.62% 00:32:02.987 cpu : usr=98.01%, sys=1.54%, ctx=63, majf=0, minf=1636 00:32:02.987 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.987 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.987 filename2: (groupid=0, jobs=1): err= 0: pid=2680102: Thu Apr 18 12:07:52 2024 00:32:02.987 read: IOPS=546, BW=2184KiB/s (2237kB/s)(21.4MiB/10021msec) 00:32:02.987 slat (nsec): min=4495, max=80200, avg=19046.63, stdev=14805.75 00:32:02.987 clat (usec): min=18527, max=66016, avg=29149.15, stdev=2292.19 00:32:02.987 lat (usec): min=18538, max=66036, avg=29168.20, stdev=2291.10 00:32:02.987 clat percentiles (usec): 00:32:02.987 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27919], 20.00th=[28181], 00:32:02.987 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28967], 00:32:02.987 | 70.00th=[29230], 80.00th=[30016], 90.00th=[30802], 95.00th=[31065], 00:32:02.987 | 99.00th=[31851], 99.50th=[32375], 99.90th=[65799], 99.95th=[65799], 00:32:02.987 | 99.99th=[65799] 00:32:02.987 bw ( KiB/s): min= 2043, max= 2304, per=4.16%, avg=2181.40, stdev=77.53, samples=20 00:32:02.987 iops : min= 510, max= 576, avg=545.20, stdev=19.41, samples=20 00:32:02.987 lat (msec) : 20=0.04%, 50=99.67%, 100=0.29% 00:32:02.987 cpu : usr=97.52%, sys=1.95%, ctx=132, majf=0, minf=1634 00:32:02.987 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.987 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.987 filename2: (groupid=0, jobs=1): err= 0: pid=2680103: Thu Apr 18 12:07:52 2024 00:32:02.987 read: IOPS=535, BW=2143KiB/s (2195kB/s)(20.9MiB/10009msec) 00:32:02.987 slat (nsec): min=3828, max=87853, avg=30453.48, stdev=16622.84 00:32:02.987 clat (usec): min=8573, max=65330, avg=29639.66, stdev=4175.27 00:32:02.987 lat (usec): min=8602, max=65344, avg=29670.11, stdev=4175.66 00:32:02.987 clat percentiles (usec): 00:32:02.987 | 1.00th=[20055], 5.00th=[25297], 10.00th=[27395], 20.00th=[28181], 00:32:02.987 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28967], 00:32:02.987 | 70.00th=[29754], 80.00th=[30540], 90.00th=[34866], 95.00th=[38011], 00:32:02.987 | 99.00th=[42730], 99.50th=[45876], 99.90th=[65274], 99.95th=[65274], 00:32:02.987 | 99.99th=[65274] 00:32:02.987 bw ( KiB/s): min= 1920, max= 2272, per=4.10%, avg=2151.47, stdev=87.83, samples=19 00:32:02.987 iops : min= 480, max= 568, avg=537.79, stdev=21.93, samples=19 00:32:02.987 lat (msec) : 10=0.02%, 20=0.93%, 50=98.75%, 100=0.30% 00:32:02.987 cpu : usr=98.10%, sys=1.48%, ctx=16, majf=0, minf=1633 00:32:02.987 IO depths : 1=3.0%, 2=6.0%, 4=14.9%, 8=64.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:32:02.987 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 complete : 0=0.0%, 4=92.1%, 8=4.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.987 issued rwts: total=5363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.987 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.987 00:32:02.987 Run status group 0 (all jobs): 00:32:02.987 READ: bw=51.2MiB/s (53.7MB/s), 2143KiB/s-2234KiB/s (2195kB/s-2288kB/s), io=513MiB (538MB), run=10003-10025msec 00:32:03.611 ----------------------------------------------------- 00:32:03.611 Suppressions used: 00:32:03.611 count bytes template 00:32:03.611 45 402 /usr/src/fio/parse.c 00:32:03.611 1 8 libtcmalloc_minimal.so 00:32:03.611 1 904 libcrypto.so 00:32:03.611 ----------------------------------------------------- 00:32:03.611 00:32:03.611 12:07:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:03.611 12:07:54 -- target/dif.sh@43 -- # local sub 00:32:03.611 12:07:54 -- target/dif.sh@45 -- # for sub in "$@" 00:32:03.611 12:07:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:03.611 12:07:54 -- target/dif.sh@36 -- # local sub_id=0 00:32:03.611 12:07:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@45 -- # for sub in "$@" 00:32:03.611 12:07:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:03.611 12:07:54 -- target/dif.sh@36 -- # local sub_id=1 00:32:03.611 12:07:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@45 -- # for sub in "$@" 00:32:03.611 12:07:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:03.611 12:07:54 -- target/dif.sh@36 -- # local sub_id=2 00:32:03.611 12:07:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:03.611 12:07:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:03.611 12:07:54 -- target/dif.sh@115 -- # numjobs=2 00:32:03.611 12:07:54 
-- target/dif.sh@115 -- # iodepth=8 00:32:03.611 12:07:54 -- target/dif.sh@115 -- # runtime=5 00:32:03.611 12:07:54 -- target/dif.sh@115 -- # files=1 00:32:03.611 12:07:54 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:03.611 12:07:54 -- target/dif.sh@28 -- # local sub 00:32:03.611 12:07:54 -- target/dif.sh@30 -- # for sub in "$@" 00:32:03.611 12:07:54 -- target/dif.sh@31 -- # create_subsystem 0 00:32:03.611 12:07:54 -- target/dif.sh@18 -- # local sub_id=0 00:32:03.611 12:07:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 bdev_null0 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 [2024-04-18 12:07:54.125376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@30 -- # for sub in "$@" 00:32:03.611 12:07:54 -- target/dif.sh@31 -- # create_subsystem 1 00:32:03.611 12:07:54 -- target/dif.sh@18 -- # local sub_id=1 00:32:03.611 12:07:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 bdev_null1 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.611 12:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.611 12:07:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.611 12:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.611 12:07:54 -- common/autotest_common.sh@10 -- # set +x 00:32:03.870 12:07:54 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.870 12:07:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:03.870 12:07:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:03.870 12:07:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:03.870 12:07:54 -- nvmf/common.sh@521 -- # config=() 00:32:03.870 12:07:54 -- nvmf/common.sh@521 -- # local subsystem config 00:32:03.870 12:07:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:03.870 12:07:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:03.870 12:07:54 -- target/dif.sh@82 -- # gen_fio_conf 00:32:03.870 12:07:54 -- target/dif.sh@54 -- # local file 00:32:03.870 12:07:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:03.870 { 00:32:03.870 "params": { 00:32:03.870 "name": "Nvme$subsystem", 00:32:03.870 "trtype": "$TEST_TRANSPORT", 00:32:03.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.870 "adrfam": "ipv4", 00:32:03.870 "trsvcid": "$NVMF_PORT", 00:32:03.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.870 "hdgst": ${hdgst:-false}, 00:32:03.870 "ddgst": ${ddgst:-false} 00:32:03.870 }, 00:32:03.870 "method": "bdev_nvme_attach_controller" 00:32:03.870 } 00:32:03.870 EOF 00:32:03.870 )") 00:32:03.870 12:07:54 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:03.870 12:07:54 -- target/dif.sh@56 -- # cat 00:32:03.870 12:07:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:03.870 12:07:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.870 12:07:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:03.870 12:07:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.870 12:07:54 -- common/autotest_common.sh@1327 -- # shift 00:32:03.870 12:07:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:03.870 12:07:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.870 12:07:54 -- nvmf/common.sh@543 -- # cat 00:32:03.870 12:07:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:03.870 12:07:54 -- target/dif.sh@72 -- # (( file <= files )) 00:32:03.870 12:07:54 -- target/dif.sh@73 -- # cat 00:32:03.870 12:07:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.870 12:07:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:03.870 12:07:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:03.870 12:07:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:03.870 12:07:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:03.870 { 00:32:03.870 "params": { 00:32:03.870 "name": "Nvme$subsystem", 00:32:03.870 "trtype": "$TEST_TRANSPORT", 00:32:03.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.870 "adrfam": "ipv4", 00:32:03.870 "trsvcid": "$NVMF_PORT", 00:32:03.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.870 "hdgst": ${hdgst:-false}, 00:32:03.870 "ddgst": ${ddgst:-false} 00:32:03.870 }, 00:32:03.870 "method": "bdev_nvme_attach_controller" 00:32:03.871 } 00:32:03.871 EOF 00:32:03.871 )") 00:32:03.871 12:07:54 -- target/dif.sh@72 -- # (( file++ )) 00:32:03.871 12:07:54 -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:03.871 12:07:54 -- nvmf/common.sh@543 -- # cat 00:32:03.871 12:07:54 -- nvmf/common.sh@545 -- # jq . 00:32:03.871 12:07:54 -- nvmf/common.sh@546 -- # IFS=, 00:32:03.871 12:07:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:03.871 "params": { 00:32:03.871 "name": "Nvme0", 00:32:03.871 "trtype": "tcp", 00:32:03.871 "traddr": "10.0.0.2", 00:32:03.871 "adrfam": "ipv4", 00:32:03.871 "trsvcid": "4420", 00:32:03.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.871 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:03.871 "hdgst": false, 00:32:03.871 "ddgst": false 00:32:03.871 }, 00:32:03.871 "method": "bdev_nvme_attach_controller" 00:32:03.871 },{ 00:32:03.871 "params": { 00:32:03.871 "name": "Nvme1", 00:32:03.871 "trtype": "tcp", 00:32:03.871 "traddr": "10.0.0.2", 00:32:03.871 "adrfam": "ipv4", 00:32:03.871 "trsvcid": "4420", 00:32:03.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.871 "hdgst": false, 00:32:03.871 "ddgst": false 00:32:03.871 }, 00:32:03.871 "method": "bdev_nvme_attach_controller" 00:32:03.871 }' 00:32:03.871 12:07:54 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:03.871 12:07:54 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:03.871 12:07:54 -- common/autotest_common.sh@1333 -- # break 00:32:03.871 12:07:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:03.871 12:07:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.130 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:04.130 ... 00:32:04.130 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:04.130 ... 
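The two job definitions above (randread with 8 KiB read / 16 KiB write / 128 KiB trim block sizes at queue depth 8) come from the harness's gen_fio_conf helper rather than a job file on disk. Purely as an illustration of what this pass exercises, an equivalent hand-written job file would look roughly like the sketch below — the bdev names and the global flags are assumptions; only the parameters echoed earlier in the trace (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) are taken from the run.

# Sketch of an equivalent job file for this pass; with numjobs=2 and two
# filename sections fio starts the 4 threads reported below.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# The JSON config here would need bdev_nvme_attach_controller entries for
# both Nvme0 and Nvme1 (same shape as the earlier sketch, one entry each).
LD_PRELOAD=./spdk/build/fio/spdk_bdev fio /tmp/dif_rand_params.fio \
  --spdk_json_conf=/tmp/spdk_bdev.json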
00:32:04.130 fio-3.35 00:32:04.130 Starting 4 threads 00:32:04.130 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.698 00:32:10.698 filename0: (groupid=0, jobs=1): err= 0: pid=2682388: Thu Apr 18 12:08:00 2024 00:32:10.698 read: IOPS=2338, BW=18.3MiB/s (19.2MB/s)(91.4MiB/5002msec) 00:32:10.698 slat (nsec): min=3529, max=30132, avg=9985.52, stdev=3341.13 00:32:10.698 clat (usec): min=1574, max=43903, avg=3394.34, stdev=1191.40 00:32:10.698 lat (usec): min=1580, max=43918, avg=3404.33, stdev=1191.24 00:32:10.698 clat percentiles (usec): 00:32:10.698 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2900], 00:32:10.698 | 30.00th=[ 3064], 40.00th=[ 3163], 50.00th=[ 3359], 60.00th=[ 3458], 00:32:10.698 | 70.00th=[ 3556], 80.00th=[ 3785], 90.00th=[ 4146], 95.00th=[ 4424], 00:32:10.698 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[43779], 00:32:10.698 | 99.99th=[43779] 00:32:10.698 bw ( KiB/s): min=18192, max=19248, per=25.42%, avg=18656.00, stdev=337.24, samples=9 00:32:10.698 iops : min= 2274, max= 2406, avg=2332.00, stdev=42.15, samples=9 00:32:10.698 lat (msec) : 2=0.32%, 4=86.75%, 10=12.86%, 50=0.07% 00:32:10.698 cpu : usr=92.74%, sys=6.88%, ctx=9, majf=0, minf=1636 00:32:10.698 IO depths : 1=0.2%, 2=1.4%, 4=68.3%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 issued rwts: total=11695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:10.698 filename0: (groupid=0, jobs=1): err= 0: pid=2682389: Thu Apr 18 12:08:00 2024 00:32:10.698 read: IOPS=2276, BW=17.8MiB/s (18.7MB/s)(89.0MiB/5005msec) 00:32:10.698 slat (nsec): min=5919, max=44487, avg=10590.70, stdev=3570.28 00:32:10.698 clat (usec): min=1131, max=44729, avg=3485.83, stdev=1220.01 00:32:10.698 lat (usec): min=1138, max=44745, avg=3496.42, stdev=1220.04 00:32:10.698 clat percentiles (usec): 00:32:10.698 | 1.00th=[ 1860], 5.00th=[ 2573], 10.00th=[ 2802], 20.00th=[ 3064], 00:32:10.698 | 30.00th=[ 3228], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3556], 00:32:10.698 | 70.00th=[ 3720], 80.00th=[ 3818], 90.00th=[ 4080], 95.00th=[ 4293], 00:32:10.698 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 7439], 99.95th=[44827], 00:32:10.698 | 99.99th=[44827] 00:32:10.698 bw ( KiB/s): min=17408, max=18800, per=24.83%, avg=18220.80, stdev=397.02, samples=10 00:32:10.698 iops : min= 2176, max= 2350, avg=2277.60, stdev=49.63, samples=10 00:32:10.698 lat (msec) : 2=1.36%, 4=86.37%, 10=12.20%, 50=0.07% 00:32:10.698 cpu : usr=92.51%, sys=7.11%, ctx=9, majf=0, minf=1637 00:32:10.698 IO depths : 1=0.2%, 2=1.5%, 4=66.5%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 issued rwts: total=11396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:10.698 filename1: (groupid=0, jobs=1): err= 0: pid=2682390: Thu Apr 18 12:08:00 2024 00:32:10.698 read: IOPS=2307, BW=18.0MiB/s (18.9MB/s)(90.3MiB/5007msec) 00:32:10.698 slat (nsec): min=5889, max=51162, avg=10273.83, stdev=3449.03 00:32:10.698 clat (usec): min=1792, max=49923, avg=3439.12, stdev=1336.02 00:32:10.698 lat (usec): min=1799, max=49948, avg=3449.39, stdev=1335.93 00:32:10.698 clat percentiles (usec): 
00:32:10.698 | 1.00th=[ 2343], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 2933], 00:32:10.698 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3392], 60.00th=[ 3458], 00:32:10.698 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4146], 95.00th=[ 4424], 00:32:10.698 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6456], 99.95th=[50070], 00:32:10.698 | 99.99th=[50070] 00:32:10.698 bw ( KiB/s): min=16944, max=19424, per=25.17%, avg=18473.60, stdev=632.36, samples=10 00:32:10.698 iops : min= 2118, max= 2428, avg=2309.20, stdev=79.05, samples=10 00:32:10.698 lat (msec) : 2=0.08%, 4=85.73%, 10=14.12%, 50=0.07% 00:32:10.698 cpu : usr=93.09%, sys=6.51%, ctx=7, majf=0, minf=1634 00:32:10.698 IO depths : 1=0.2%, 2=1.3%, 4=67.5%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 issued rwts: total=11554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:10.698 filename1: (groupid=0, jobs=1): err= 0: pid=2682391: Thu Apr 18 12:08:00 2024 00:32:10.698 read: IOPS=2256, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5003msec) 00:32:10.698 slat (nsec): min=6626, max=30252, avg=10279.00, stdev=3567.86 00:32:10.698 clat (usec): min=1822, max=45666, avg=3518.44, stdev=1233.43 00:32:10.698 lat (usec): min=1829, max=45692, avg=3528.72, stdev=1233.44 00:32:10.698 clat percentiles (usec): 00:32:10.698 | 1.00th=[ 2376], 5.00th=[ 2638], 10.00th=[ 2835], 20.00th=[ 3064], 00:32:10.698 | 30.00th=[ 3261], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3589], 00:32:10.698 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 4359], 00:32:10.698 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 6849], 99.95th=[45876], 00:32:10.698 | 99.99th=[45876] 00:32:10.698 bw ( KiB/s): min=16880, max=18432, per=24.59%, avg=18048.00, stdev=485.07, samples=10 00:32:10.698 iops : min= 2110, max= 2304, avg=2256.00, stdev=60.63, samples=10 00:32:10.698 lat (msec) : 2=0.09%, 4=86.37%, 10=13.47%, 50=0.07% 00:32:10.698 cpu : usr=94.04%, sys=5.56%, ctx=7, majf=0, minf=1636 00:32:10.698 IO depths : 1=0.3%, 2=1.9%, 4=65.8%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.698 issued rwts: total=11288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:10.698 00:32:10.698 Run status group 0 (all jobs): 00:32:10.698 READ: bw=71.7MiB/s (75.2MB/s), 17.6MiB/s-18.3MiB/s (18.5MB/s-19.2MB/s), io=359MiB (376MB), run=5002-5007msec 00:32:11.265 ----------------------------------------------------- 00:32:11.265 Suppressions used: 00:32:11.265 count bytes template 00:32:11.265 6 52 /usr/src/fio/parse.c 00:32:11.265 1 8 libtcmalloc_minimal.so 00:32:11.265 1 904 libcrypto.so 00:32:11.265 ----------------------------------------------------- 00:32:11.265 00:32:11.265 12:08:01 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:11.265 12:08:01 -- target/dif.sh@43 -- # local sub 00:32:11.265 12:08:01 -- target/dif.sh@45 -- # for sub in "$@" 00:32:11.265 12:08:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:11.265 12:08:01 -- target/dif.sh@36 -- # local sub_id=0 00:32:11.265 12:08:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:11.265 12:08:01 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.265 12:08:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:11.265 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.265 12:08:01 -- target/dif.sh@45 -- # for sub in "$@" 00:32:11.265 12:08:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:11.265 12:08:01 -- target/dif.sh@36 -- # local sub_id=1 00:32:11.265 12:08:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.265 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.265 12:08:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:11.265 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.265 00:32:11.265 real 0m28.472s 00:32:11.265 user 4m59.668s 00:32:11.265 sys 0m9.301s 00:32:11.265 12:08:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 ************************************ 00:32:11.265 END TEST fio_dif_rand_params 00:32:11.265 ************************************ 00:32:11.265 12:08:01 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:11.265 12:08:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:11.265 12:08:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 ************************************ 00:32:11.265 START TEST fio_dif_digest 00:32:11.265 ************************************ 00:32:11.265 12:08:01 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:32:11.265 12:08:01 -- target/dif.sh@123 -- # local NULL_DIF 00:32:11.265 12:08:01 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:11.265 12:08:01 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:11.265 12:08:01 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:11.265 12:08:01 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:11.265 12:08:01 -- target/dif.sh@127 -- # numjobs=3 00:32:11.265 12:08:01 -- target/dif.sh@127 -- # iodepth=3 00:32:11.265 12:08:01 -- target/dif.sh@127 -- # runtime=10 00:32:11.265 12:08:01 -- target/dif.sh@128 -- # hdgst=true 00:32:11.265 12:08:01 -- target/dif.sh@128 -- # ddgst=true 00:32:11.265 12:08:01 -- target/dif.sh@130 -- # create_subsystems 0 00:32:11.265 12:08:01 -- target/dif.sh@28 -- # local sub 00:32:11.265 12:08:01 -- target/dif.sh@30 -- # for sub in "$@" 00:32:11.265 12:08:01 -- target/dif.sh@31 -- # create_subsystem 0 00:32:11.265 12:08:01 -- target/dif.sh@18 -- # local sub_id=0 00:32:11.265 12:08:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:11.265 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 bdev_null0 00:32:11.265 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.265 12:08:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:11.265 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.265 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.265 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.266 12:08:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:11.266 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.266 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.266 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.266 12:08:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.266 12:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:11.266 12:08:01 -- common/autotest_common.sh@10 -- # set +x 00:32:11.523 [2024-04-18 12:08:01.814599] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.523 12:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:11.523 12:08:01 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:11.523 12:08:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.523 12:08:01 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.523 12:08:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:11.523 12:08:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:11.523 12:08:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:11.523 12:08:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:11.523 12:08:01 -- common/autotest_common.sh@1327 -- # shift 00:32:11.523 12:08:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:11.523 12:08:01 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:11.523 12:08:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.523 12:08:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:11.523 12:08:01 -- target/dif.sh@82 -- # gen_fio_conf 00:32:11.523 12:08:01 -- nvmf/common.sh@521 -- # config=() 00:32:11.523 12:08:01 -- target/dif.sh@54 -- # local file 00:32:11.523 12:08:01 -- nvmf/common.sh@521 -- # local subsystem config 00:32:11.523 12:08:01 -- target/dif.sh@56 -- # cat 00:32:11.523 12:08:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:11.523 12:08:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:11.523 { 00:32:11.523 "params": { 00:32:11.523 "name": "Nvme$subsystem", 00:32:11.523 "trtype": "$TEST_TRANSPORT", 00:32:11.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.523 "adrfam": "ipv4", 00:32:11.523 "trsvcid": "$NVMF_PORT", 00:32:11.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.523 "hdgst": ${hdgst:-false}, 00:32:11.523 "ddgst": ${ddgst:-false} 00:32:11.523 }, 00:32:11.523 "method": "bdev_nvme_attach_controller" 00:32:11.523 } 00:32:11.523 EOF 00:32:11.523 )") 00:32:11.523 12:08:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:11.523 12:08:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:11.523 12:08:01 -- nvmf/common.sh@543 -- # cat 
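The digest test's target-side setup traced above reduces to four RPCs: create a null bdev with 16-byte metadata and DIF type 3, create a subsystem, add the bdev as a namespace, and add a TCP listener; the host side then attaches with "hdgst": true and "ddgst": true in the generated JSON so that NVMe/TCP header and data digests are exercised. For reference, the same sequence issued by hand would look like the sketch below — the rpc.py path is an assumption (the harness goes through its rpc_cmd wrapper); the arguments are copied from the trace.

# Target-side sketch of the fio_dif_digest setup seen above.
RPC=./spdk/scripts/rpc.py   # assumed path; the harness uses rpc_cmd instead

# 64 MB null bdev, 512 B blocks, 16 B metadata, protection information type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it over NVMe/TCP on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420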
00:32:11.523 12:08:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:11.523 12:08:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:11.523 12:08:01 -- target/dif.sh@72 -- # (( file <= files )) 00:32:11.523 12:08:01 -- nvmf/common.sh@545 -- # jq . 00:32:11.523 12:08:01 -- nvmf/common.sh@546 -- # IFS=, 00:32:11.523 12:08:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:11.523 "params": { 00:32:11.523 "name": "Nvme0", 00:32:11.523 "trtype": "tcp", 00:32:11.523 "traddr": "10.0.0.2", 00:32:11.523 "adrfam": "ipv4", 00:32:11.523 "trsvcid": "4420", 00:32:11.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:11.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:11.523 "hdgst": true, 00:32:11.523 "ddgst": true 00:32:11.523 }, 00:32:11.523 "method": "bdev_nvme_attach_controller" 00:32:11.523 }' 00:32:11.523 12:08:01 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:11.523 12:08:01 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:11.523 12:08:01 -- common/autotest_common.sh@1333 -- # break 00:32:11.523 12:08:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:11.523 12:08:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.782 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:11.782 ... 00:32:11.782 fio-3.35 00:32:11.782 Starting 3 threads 00:32:11.782 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.981 00:32:23.981 filename0: (groupid=0, jobs=1): err= 0: pid=2683808: Thu Apr 18 12:08:13 2024 00:32:23.981 read: IOPS=255, BW=31.9MiB/s (33.4MB/s)(321MiB/10050msec) 00:32:23.981 slat (nsec): min=7496, max=44244, avg=21633.44, stdev=5143.50 00:32:23.981 clat (usec): min=5378, max=95731, avg=11716.95, stdev=4551.74 00:32:23.981 lat (usec): min=5389, max=95748, avg=11738.58, stdev=4552.28 00:32:23.981 clat percentiles (usec): 00:32:23.981 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9372], 00:32:23.981 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:32:23.981 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:32:23.981 | 99.00th=[15533], 99.50th=[54264], 99.90th=[56361], 99.95th=[56886], 00:32:23.981 | 99.99th=[95945] 00:32:23.981 bw ( KiB/s): min=25344, max=37120, per=36.90%, avg=32784.80, stdev=2819.22, samples=20 00:32:23.981 iops : min= 198, max= 290, avg=256.10, stdev=22.07, samples=20 00:32:23.981 lat (msec) : 10=27.42%, 20=71.72%, 50=0.04%, 100=0.82% 00:32:23.981 cpu : usr=95.01%, sys=4.59%, ctx=16, majf=0, minf=1632 00:32:23.981 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.981 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:23.981 filename0: (groupid=0, jobs=1): err= 0: pid=2683809: Thu Apr 18 12:08:13 2024 00:32:23.981 read: IOPS=265, BW=33.1MiB/s (34.8MB/s)(332MiB/10010msec) 00:32:23.981 slat (nsec): min=4451, max=66597, avg=18842.12, stdev=6219.47 00:32:23.981 clat (usec): min=5361, max=56003, avg=11290.59, stdev=3722.40 00:32:23.981 lat (usec): min=5372, max=56017, avg=11309.44, stdev=3722.35 00:32:23.981 
clat percentiles (usec): 00:32:23.981 | 1.00th=[ 5932], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 9110], 00:32:23.981 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:32:23.981 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13173], 95.00th=[13566], 00:32:23.981 | 99.00th=[14615], 99.50th=[50594], 99.90th=[55313], 99.95th=[55313], 00:32:23.981 | 99.99th=[55837] 00:32:23.981 bw ( KiB/s): min=27904, max=37120, per=38.11%, avg=33859.37, stdev=2452.26, samples=19 00:32:23.981 iops : min= 218, max= 290, avg=264.53, stdev=19.16, samples=19 00:32:23.981 lat (msec) : 10=30.97%, 20=68.35%, 50=0.11%, 100=0.57% 00:32:23.981 cpu : usr=93.62%, sys=5.98%, ctx=14, majf=0, minf=1636 00:32:23.981 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.981 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:23.981 filename0: (groupid=0, jobs=1): err= 0: pid=2683810: Thu Apr 18 12:08:13 2024 00:32:23.981 read: IOPS=174, BW=21.9MiB/s (22.9MB/s)(220MiB/10046msec) 00:32:23.981 slat (nsec): min=7089, max=42122, avg=18792.92, stdev=5718.27 00:32:23.981 clat (usec): min=7925, max=98740, avg=17096.01, stdev=13281.24 00:32:23.981 lat (usec): min=7949, max=98763, avg=17114.80, stdev=13281.27 00:32:23.981 clat percentiles (usec): 00:32:23.981 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[11469], 20.00th=[12125], 00:32:23.981 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:32:23.981 | 70.00th=[13698], 80.00th=[14222], 90.00th=[17695], 95.00th=[54264], 00:32:23.981 | 99.00th=[56361], 99.50th=[57410], 99.90th=[98042], 99.95th=[99091], 00:32:23.981 | 99.99th=[99091] 00:32:23.981 bw ( KiB/s): min=15104, max=26880, per=25.30%, avg=22476.80, stdev=3404.95, samples=20 00:32:23.981 iops : min= 118, max= 210, avg=175.60, stdev=26.60, samples=20 00:32:23.981 lat (msec) : 10=4.72%, 20=85.32%, 50=0.06%, 100=9.90% 00:32:23.981 cpu : usr=95.07%, sys=4.58%, ctx=22, majf=0, minf=1639 00:32:23.981 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.981 issued rwts: total=1758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:23.981 00:32:23.981 Run status group 0 (all jobs): 00:32:23.981 READ: bw=86.8MiB/s (91.0MB/s), 21.9MiB/s-33.1MiB/s (22.9MB/s-34.8MB/s), io=872MiB (914MB), run=10010-10050msec 00:32:23.981 ----------------------------------------------------- 00:32:23.981 Suppressions used: 00:32:23.981 count bytes template 00:32:23.981 5 44 /usr/src/fio/parse.c 00:32:23.981 1 8 libtcmalloc_minimal.so 00:32:23.981 1 904 libcrypto.so 00:32:23.981 ----------------------------------------------------- 00:32:23.981 00:32:23.981 12:08:14 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:23.981 12:08:14 -- target/dif.sh@43 -- # local sub 00:32:23.981 12:08:14 -- target/dif.sh@45 -- # for sub in "$@" 00:32:23.981 12:08:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:23.981 12:08:14 -- target/dif.sh@36 -- # local sub_id=0 00:32:23.981 12:08:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.981 12:08:14 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.981 12:08:14 -- common/autotest_common.sh@10 -- # set +x 00:32:23.981 12:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.981 12:08:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:23.981 12:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.981 12:08:14 -- common/autotest_common.sh@10 -- # set +x 00:32:23.981 12:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.981 00:32:23.981 real 0m12.695s 00:32:23.981 user 0m39.064s 00:32:23.981 sys 0m2.126s 00:32:23.981 12:08:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:23.981 12:08:14 -- common/autotest_common.sh@10 -- # set +x 00:32:23.981 ************************************ 00:32:23.981 END TEST fio_dif_digest 00:32:23.981 ************************************ 00:32:23.981 12:08:14 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:23.981 12:08:14 -- target/dif.sh@147 -- # nvmftestfini 00:32:23.981 12:08:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:23.981 12:08:14 -- nvmf/common.sh@117 -- # sync 00:32:23.981 12:08:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:23.981 12:08:14 -- nvmf/common.sh@120 -- # set +e 00:32:23.981 12:08:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:23.981 12:08:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:24.239 rmmod nvme_tcp 00:32:24.239 rmmod nvme_fabrics 00:32:24.239 rmmod nvme_keyring 00:32:24.239 12:08:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:24.239 12:08:14 -- nvmf/common.sh@124 -- # set -e 00:32:24.239 12:08:14 -- nvmf/common.sh@125 -- # return 0 00:32:24.239 12:08:14 -- nvmf/common.sh@478 -- # '[' -n 2673619 ']' 00:32:24.239 12:08:14 -- nvmf/common.sh@479 -- # killprocess 2673619 00:32:24.239 12:08:14 -- common/autotest_common.sh@936 -- # '[' -z 2673619 ']' 00:32:24.239 12:08:14 -- common/autotest_common.sh@940 -- # kill -0 2673619 00:32:24.239 12:08:14 -- common/autotest_common.sh@941 -- # uname 00:32:24.239 12:08:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:24.239 12:08:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673619 00:32:24.239 12:08:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:24.239 12:08:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:24.239 12:08:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673619' 00:32:24.239 killing process with pid 2673619 00:32:24.239 12:08:14 -- common/autotest_common.sh@955 -- # kill 2673619 00:32:24.239 12:08:14 -- common/autotest_common.sh@960 -- # wait 2673619 00:32:25.615 12:08:15 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:32:25.615 12:08:15 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:28.147 Waiting for block devices as requested 00:32:28.147 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:28.147 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:28.147 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:28.147 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:28.406 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:28.406 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:28.406 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:28.665 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:28.665 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:28.665 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:28.923 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:28.923 
0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:28.923 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:28.923 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:29.182 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:29.182 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:29.182 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:29.441 12:08:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:29.441 12:08:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:29.441 12:08:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:29.441 12:08:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:29.441 12:08:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.441 12:08:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:29.441 12:08:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.372 12:08:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:31.372 00:32:31.372 real 1m25.541s 00:32:31.372 user 7m36.288s 00:32:31.372 sys 0m29.139s 00:32:31.372 12:08:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:31.372 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:32:31.372 ************************************ 00:32:31.372 END TEST nvmf_dif 00:32:31.372 ************************************ 00:32:31.630 12:08:21 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:31.630 12:08:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:31.630 12:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:31.630 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:32:31.630 ************************************ 00:32:31.630 START TEST nvmf_abort_qd_sizes 00:32:31.630 ************************************ 00:32:31.630 12:08:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:31.889 * Looking for test storage... 
00:32:31.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.889 12:08:22 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.889 12:08:22 -- nvmf/common.sh@7 -- # uname -s 00:32:31.889 12:08:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.889 12:08:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.889 12:08:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.889 12:08:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.889 12:08:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.889 12:08:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.889 12:08:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.889 12:08:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.889 12:08:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.889 12:08:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.889 12:08:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:31.889 12:08:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:31.889 12:08:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.889 12:08:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.889 12:08:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.889 12:08:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.889 12:08:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.889 12:08:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.889 12:08:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.889 12:08:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.889 12:08:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.889 12:08:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.889 12:08:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.889 12:08:22 -- paths/export.sh@5 -- # export PATH 00:32:31.889 12:08:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.889 12:08:22 -- nvmf/common.sh@47 -- # : 0 00:32:31.889 12:08:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:31.889 12:08:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:31.889 12:08:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.889 12:08:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.889 12:08:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.889 12:08:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:31.889 12:08:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:31.889 12:08:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:31.889 12:08:22 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:31.889 12:08:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:31.889 12:08:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.889 12:08:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:31.889 12:08:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:31.889 12:08:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:31.889 12:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.889 12:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:31.889 12:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.889 12:08:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:31.889 12:08:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:31.889 12:08:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:31.889 12:08:22 -- common/autotest_common.sh@10 -- # set +x 00:32:38.451 12:08:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:38.451 12:08:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:38.451 12:08:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:38.451 12:08:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:38.451 12:08:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:38.451 12:08:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:38.451 12:08:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:38.451 12:08:28 -- nvmf/common.sh@295 -- # net_devs=() 00:32:38.451 12:08:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:38.451 12:08:28 -- nvmf/common.sh@296 -- # e810=() 00:32:38.451 12:08:28 -- nvmf/common.sh@296 -- # local -ga e810 00:32:38.451 12:08:28 -- nvmf/common.sh@297 -- # x722=() 00:32:38.451 12:08:28 -- nvmf/common.sh@297 -- # local -ga x722 00:32:38.451 12:08:28 -- nvmf/common.sh@298 -- # mlx=() 00:32:38.451 12:08:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:38.451 12:08:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.451 12:08:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:38.451 12:08:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:38.451 12:08:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:38.451 12:08:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.451 12:08:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:38.451 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:38.451 12:08:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.451 12:08:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:38.451 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:38.451 12:08:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:38.451 12:08:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.451 12:08:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.451 12:08:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:38.451 12:08:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.451 12:08:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:38.451 Found net devices under 0000:af:00.0: cvl_0_0 00:32:38.451 12:08:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.451 12:08:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.451 12:08:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.451 12:08:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:38.451 12:08:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.451 12:08:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:38.451 Found net devices under 0000:af:00.1: cvl_0_1 00:32:38.451 12:08:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.451 12:08:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:38.451 12:08:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:38.451 12:08:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:38.451 12:08:28 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:38.451 12:08:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:38.451 12:08:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.451 12:08:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.451 12:08:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.451 12:08:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:38.451 12:08:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.451 12:08:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.451 12:08:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:38.451 12:08:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.451 12:08:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.451 12:08:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:38.451 12:08:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:38.451 12:08:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.451 12:08:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.451 12:08:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.451 12:08:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.451 12:08:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:38.451 12:08:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.451 12:08:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.451 12:08:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.451 12:08:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:38.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:32:38.451 00:32:38.451 --- 10.0.0.2 ping statistics --- 00:32:38.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.451 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:32:38.451 12:08:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:32:38.451 00:32:38.451 --- 10.0.0.1 ping statistics --- 00:32:38.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.451 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:32:38.451 12:08:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.451 12:08:28 -- nvmf/common.sh@411 -- # return 0 00:32:38.451 12:08:28 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:32:38.451 12:08:28 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:41.739 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:41.739 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:43.644 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:43.645 12:08:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.645 12:08:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:43.645 12:08:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:43.645 12:08:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.645 12:08:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:43.645 12:08:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:43.645 12:08:33 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:43.645 12:08:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:43.645 12:08:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:43.645 12:08:33 -- common/autotest_common.sh@10 -- # set +x 00:32:43.645 12:08:33 -- nvmf/common.sh@470 -- # nvmfpid=2692375 00:32:43.645 12:08:33 -- nvmf/common.sh@471 -- # waitforlisten 2692375 00:32:43.645 12:08:33 -- common/autotest_common.sh@817 -- # '[' -z 2692375 ']' 00:32:43.645 12:08:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.645 12:08:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:43.645 12:08:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.645 12:08:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:43.645 12:08:33 -- common/autotest_common.sh@10 -- # set +x 00:32:43.645 12:08:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:43.645 [2024-04-18 12:08:33.905079] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
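[Note] The nvmf_tgt launched above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up a few lines earlier: one port of the E810 NIC (cvl_0_0, 10.0.0.2) is moved into the namespace for the target, while the other port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, so host and target exchange real TCP traffic over the physical link. Condensed from the ip/iptables calls above, with interface names and addresses as in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                             # reachability check before starting the target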
00:32:43.645 [2024-04-18 12:08:33.905181] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.645 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.645 [2024-04-18 12:08:34.036142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:43.902 [2024-04-18 12:08:34.251210] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.902 [2024-04-18 12:08:34.251257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.902 [2024-04-18 12:08:34.251270] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.902 [2024-04-18 12:08:34.251301] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.903 [2024-04-18 12:08:34.251310] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.903 [2024-04-18 12:08:34.251383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.903 [2024-04-18 12:08:34.251468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:43.903 [2024-04-18 12:08:34.251521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.903 [2024-04-18 12:08:34.251530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:44.160 12:08:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:44.160 12:08:34 -- common/autotest_common.sh@850 -- # return 0 00:32:44.160 12:08:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:44.160 12:08:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:44.160 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:32:44.422 12:08:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:44.422 12:08:34 -- scripts/common.sh@309 -- # local bdf bdfs 00:32:44.422 12:08:34 -- scripts/common.sh@310 -- # local nvmes 00:32:44.422 12:08:34 -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:32:44.422 12:08:34 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:44.422 12:08:34 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:44.422 12:08:34 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:32:44.422 12:08:34 -- scripts/common.sh@320 -- # uname -s 00:32:44.422 12:08:34 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:44.422 12:08:34 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:44.422 12:08:34 -- scripts/common.sh@325 -- # (( 1 )) 00:32:44.422 12:08:34 -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:44.422 12:08:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:44.422 12:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:44.422 12:08:34 -- 
common/autotest_common.sh@10 -- # set +x 00:32:44.422 ************************************ 00:32:44.422 START TEST spdk_target_abort 00:32:44.422 ************************************ 00:32:44.422 12:08:34 -- common/autotest_common.sh@1111 -- # spdk_target 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:44.422 12:08:34 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:32:44.422 12:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.422 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:32:47.712 spdk_targetn1 00:32:47.712 12:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.712 12:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.712 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:32:47.712 [2024-04-18 12:08:37.807397] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.712 12:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:47.712 12:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.712 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:32:47.712 12:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:47.712 12:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.712 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:32:47.712 12:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:47.712 12:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.712 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:32:47.712 [2024-04-18 12:08:37.858845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.712 12:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
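[Note] The rabort helper whose arguments are being expanded here simply concatenates the transport parameters into the -r connection string and then runs SPDK's abort example once per queue depth (4, 24 and 64). With this run's values that boils down to the loop below; the SPDK tree path is abbreviated:

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
      # workload flags taken verbatim from the invocations above (mixed r/w, 4096-byte I/O);
      # the tool's per-run summary (I/O completed, aborts submitted, success/unsuccess)
      # is what appears in the log after each iteration
      /path/to/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done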
00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:47.712 12:08:37 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.712 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.001 Initializing NVMe Controllers 00:32:51.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:51.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:51.001 Initialization complete. Launching workers. 00:32:51.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8419, failed: 0 00:32:51.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1425, failed to submit 6994 00:32:51.001 success 863, unsuccess 562, failed 0 00:32:51.001 12:08:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:51.001 12:08:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:51.001 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.288 Initializing NVMe Controllers 00:32:54.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:54.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:54.288 Initialization complete. Launching workers. 00:32:54.288 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8672, failed: 0 00:32:54.288 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7406 00:32:54.288 success 300, unsuccess 966, failed 0 00:32:54.288 12:08:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:54.288 12:08:44 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:54.288 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.602 Initializing NVMe Controllers 00:32:57.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:57.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:57.602 Initialization complete. Launching workers. 
00:32:57.602 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33445, failed: 0 00:32:57.602 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2762, failed to submit 30683 00:32:57.602 success 594, unsuccess 2168, failed 0 00:32:57.602 12:08:47 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:57.602 12:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.602 12:08:47 -- common/autotest_common.sh@10 -- # set +x 00:32:57.602 12:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.602 12:08:47 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:57.602 12:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.602 12:08:47 -- common/autotest_common.sh@10 -- # set +x 00:32:59.506 12:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.506 12:08:49 -- target/abort_qd_sizes.sh@61 -- # killprocess 2692375 00:32:59.506 12:08:49 -- common/autotest_common.sh@936 -- # '[' -z 2692375 ']' 00:32:59.506 12:08:49 -- common/autotest_common.sh@940 -- # kill -0 2692375 00:32:59.506 12:08:49 -- common/autotest_common.sh@941 -- # uname 00:32:59.506 12:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:59.506 12:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2692375 00:32:59.506 12:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:59.506 12:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:59.506 12:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2692375' 00:32:59.506 killing process with pid 2692375 00:32:59.506 12:08:49 -- common/autotest_common.sh@955 -- # kill 2692375 00:32:59.506 12:08:49 -- common/autotest_common.sh@960 -- # wait 2692375 00:33:00.444 00:33:00.444 real 0m15.803s 00:33:00.444 user 1m1.315s 00:33:00.444 sys 0m2.796s 00:33:00.444 12:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:00.444 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:33:00.444 ************************************ 00:33:00.444 END TEST spdk_target_abort 00:33:00.444 ************************************ 00:33:00.444 12:08:50 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:00.444 12:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:00.444 12:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:00.444 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:33:00.444 ************************************ 00:33:00.444 START TEST kernel_target_abort 00:33:00.444 ************************************ 00:33:00.444 12:08:50 -- common/autotest_common.sh@1111 -- # kernel_target 00:33:00.444 12:08:50 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:00.444 12:08:50 -- nvmf/common.sh@717 -- # local ip 00:33:00.444 12:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:33:00.444 12:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:33:00.444 12:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.444 12:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.444 12:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:33:00.444 12:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.444 12:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:33:00.444 12:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:33:00.444 12:08:50 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:33:00.444 12:08:50 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:00.444 12:08:50 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:00.444 12:08:50 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:33:00.444 12:08:50 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:00.444 12:08:50 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:00.444 12:08:50 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:00.444 12:08:50 -- nvmf/common.sh@628 -- # local block nvme 00:33:00.444 12:08:50 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:33:00.444 12:08:50 -- nvmf/common.sh@631 -- # modprobe nvmet 00:33:00.444 12:08:50 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:00.444 12:08:50 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:03.737 Waiting for block devices as requested 00:33:03.737 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:03.737 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:03.737 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:03.996 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:03.996 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:03.996 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:04.254 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:04.254 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:04.254 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:04.254 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:04.513 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:04.513 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:04.513 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:04.772 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:04.772 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:04.772 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:05.031 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:05.967 12:08:56 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:05.967 12:08:56 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:05.967 12:08:56 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:33:05.967 12:08:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:05.967 12:08:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:05.967 12:08:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:05.967 12:08:56 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:33:05.967 12:08:56 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:05.967 12:08:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:05.967 No valid GPT data, bailing 00:33:05.967 12:08:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:05.967 12:08:56 -- scripts/common.sh@391 -- # pt= 00:33:05.967 12:08:56 -- scripts/common.sh@392 -- # return 1 00:33:05.967 12:08:56 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:33:05.967 12:08:56 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:33:05.967 12:08:56 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:05.967 12:08:56 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:05.967 12:08:56 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:05.967 12:08:56 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:05.967 12:08:56 -- nvmf/common.sh@656 -- # echo 1 00:33:05.967 12:08:56 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:33:05.967 12:08:56 -- nvmf/common.sh@658 -- # echo 1 00:33:05.967 12:08:56 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:33:05.967 12:08:56 -- nvmf/common.sh@661 -- # echo tcp 00:33:05.968 12:08:56 -- nvmf/common.sh@662 -- # echo 4420 00:33:05.968 12:08:56 -- nvmf/common.sh@663 -- # echo ipv4 00:33:05.968 12:08:56 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:05.968 12:08:56 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:33:05.968 00:33:05.968 Discovery Log Number of Records 2, Generation counter 2 00:33:05.968 =====Discovery Log Entry 0====== 00:33:05.968 trtype: tcp 00:33:05.968 adrfam: ipv4 00:33:05.968 subtype: current discovery subsystem 00:33:05.968 treq: not specified, sq flow control disable supported 00:33:05.968 portid: 1 00:33:05.968 trsvcid: 4420 00:33:05.968 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:05.968 traddr: 10.0.0.1 00:33:05.968 eflags: none 00:33:05.968 sectype: none 00:33:05.968 =====Discovery Log Entry 1====== 00:33:05.968 trtype: tcp 00:33:05.968 adrfam: ipv4 00:33:05.968 subtype: nvme subsystem 00:33:05.968 treq: not specified, sq flow control disable supported 00:33:05.968 portid: 1 00:33:05.968 trsvcid: 4420 00:33:05.968 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:05.968 traddr: 10.0.0.1 00:33:05.968 eflags: none 00:33:05.968 sectype: none 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:05.968 12:08:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:05.968 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.252 Initializing NVMe Controllers 00:33:09.252 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:09.252 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:09.252 Initialization complete. Launching workers. 00:33:09.252 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62169, failed: 0 00:33:09.252 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 62169, failed to submit 0 00:33:09.252 success 0, unsuccess 62169, failed 0 00:33:09.252 12:08:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:09.252 12:08:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:09.252 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.534 Initializing NVMe Controllers 00:33:12.534 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:12.534 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:12.534 Initialization complete. Launching workers. 00:33:12.534 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104214, failed: 0 00:33:12.534 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26322, failed to submit 77892 00:33:12.534 success 0, unsuccess 26322, failed 0 00:33:12.534 12:09:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:12.534 12:09:02 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:12.534 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.857 Initializing NVMe Controllers 00:33:15.857 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:15.857 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:15.857 Initialization complete. Launching workers. 
00:33:15.857 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101600, failed: 0 00:33:15.857 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25402, failed to submit 76198 00:33:15.857 success 0, unsuccess 25402, failed 0 00:33:15.857 12:09:05 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:15.857 12:09:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:15.857 12:09:05 -- nvmf/common.sh@675 -- # echo 0 00:33:15.857 12:09:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.857 12:09:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:15.857 12:09:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:15.857 12:09:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.857 12:09:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:33:15.857 12:09:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:33:15.857 12:09:05 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:18.389 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:18.389 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:19.765 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:33:20.025 00:33:20.025 real 0m19.449s 00:33:20.025 user 0m7.700s 00:33:20.025 sys 0m6.593s 00:33:20.025 12:09:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:20.025 12:09:10 -- common/autotest_common.sh@10 -- # set +x 00:33:20.025 ************************************ 00:33:20.025 END TEST kernel_target_abort 00:33:20.025 ************************************ 00:33:20.025 12:09:10 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:20.025 12:09:10 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:20.025 12:09:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:20.025 12:09:10 -- nvmf/common.sh@117 -- # sync 00:33:20.025 12:09:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:20.025 12:09:10 -- nvmf/common.sh@120 -- # set +e 00:33:20.025 12:09:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:20.025 12:09:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:20.025 rmmod nvme_tcp 00:33:20.025 rmmod nvme_fabrics 00:33:20.025 rmmod nvme_keyring 00:33:20.025 12:09:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:20.025 12:09:10 -- nvmf/common.sh@124 -- # set -e 00:33:20.025 12:09:10 -- nvmf/common.sh@125 -- # return 0 00:33:20.025 12:09:10 -- nvmf/common.sh@478 -- # '[' -n 2692375 ']' 
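[Note] The kernel_target_abort phase that finished above configured the in-kernel nvmet target purely through configfs, and clean_kernel_target unwinds the same nodes. Condensed below, with the device path, NQN and listen address taken from this run; the attribute file names follow the standard nvmet configfs layout and are not spelled out in the xtrace above, so treat this as a sketch rather than the exact helper:

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    modprobe nvmet                                   # target core (the teardown also removes nvmet_tcp)
    mkdir "$cfg/subsystems/$nqn"
    mkdir "$cfg/subsystems/$nqn/namespaces/1"
    mkdir "$cfg/ports/1"
    echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme0n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
    echo tcp          > "$cfg/ports/1/addr_trtype"
    echo 4420         > "$cfg/ports/1/addr_trsvcid"
    echo ipv4         > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"
    # teardown mirrors this: remove the port/subsystem link, rmdir namespaces/1,
    # ports/1 and the subsystem directory, then modprobe -r nvmet_tcp nvmet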
00:33:20.025 12:09:10 -- nvmf/common.sh@479 -- # killprocess 2692375 00:33:20.025 12:09:10 -- common/autotest_common.sh@936 -- # '[' -z 2692375 ']' 00:33:20.025 12:09:10 -- common/autotest_common.sh@940 -- # kill -0 2692375 00:33:20.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2692375) - No such process 00:33:20.025 12:09:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2692375 is not found' 00:33:20.025 Process with pid 2692375 is not found 00:33:20.025 12:09:10 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:20.025 12:09:10 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:23.311 Waiting for block devices as requested 00:33:23.311 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:23.311 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:23.311 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:23.569 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:23.569 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:23.569 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:23.569 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:23.827 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:23.827 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:23.827 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:24.086 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:24.086 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:24.086 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:24.345 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:24.345 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:24.345 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:24.603 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:24.603 12:09:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:24.603 12:09:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:24.603 12:09:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.603 12:09:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:24.603 12:09:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.603 12:09:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.603 12:09:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.135 12:09:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:27.135 00:33:27.135 real 0m55.068s 00:33:27.135 user 1m13.786s 00:33:27.135 sys 0m19.620s 00:33:27.135 12:09:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.135 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:33:27.136 ************************************ 00:33:27.136 END TEST nvmf_abort_qd_sizes 00:33:27.136 ************************************ 00:33:27.136 12:09:17 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:27.136 12:09:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:27.136 12:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.136 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:33:27.136 ************************************ 00:33:27.136 START TEST keyring_file 00:33:27.136 ************************************ 00:33:27.136 12:09:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:27.136 * Looking for test storage... 
00:33:27.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:27.136 12:09:17 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:27.136 12:09:17 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.136 12:09:17 -- nvmf/common.sh@7 -- # uname -s 00:33:27.136 12:09:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.136 12:09:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.136 12:09:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.136 12:09:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.136 12:09:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.136 12:09:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.136 12:09:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.136 12:09:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.136 12:09:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.136 12:09:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.136 12:09:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:27.136 12:09:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:27.136 12:09:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.136 12:09:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.136 12:09:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.136 12:09:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.136 12:09:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.136 12:09:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.136 12:09:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.136 12:09:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.136 12:09:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.136 12:09:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.136 12:09:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.136 12:09:17 -- paths/export.sh@5 -- # export PATH 00:33:27.136 12:09:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.136 12:09:17 -- nvmf/common.sh@47 -- # : 0 00:33:27.136 12:09:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.136 12:09:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.136 12:09:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.136 12:09:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.136 12:09:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.136 12:09:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.136 12:09:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.136 12:09:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.136 12:09:17 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:27.136 12:09:17 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:27.136 12:09:17 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:27.136 12:09:17 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:27.136 12:09:17 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:27.136 12:09:17 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:27.136 12:09:17 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:27.136 12:09:17 -- keyring/common.sh@15 -- # local name key digest path 00:33:27.136 12:09:17 -- keyring/common.sh@17 -- # name=key0 00:33:27.136 12:09:17 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:27.136 12:09:17 -- keyring/common.sh@17 -- # digest=0 00:33:27.136 12:09:17 -- keyring/common.sh@18 -- # mktemp 00:33:27.136 12:09:17 -- keyring/common.sh@18 -- # path=/tmp/tmp.6DjK1Yrm95 00:33:27.136 12:09:17 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:27.136 12:09:17 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:27.136 12:09:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:27.136 12:09:17 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:27.136 12:09:17 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:27.136 12:09:17 -- nvmf/common.sh@693 -- # digest=0 00:33:27.136 12:09:17 -- nvmf/common.sh@694 -- # python - 00:33:27.136 12:09:17 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6DjK1Yrm95 00:33:27.136 12:09:17 -- keyring/common.sh@23 -- # echo /tmp/tmp.6DjK1Yrm95 00:33:27.136 12:09:17 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6DjK1Yrm95 00:33:27.136 12:09:17 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:27.136 12:09:17 -- keyring/common.sh@15 -- # local name key digest path 00:33:27.136 12:09:17 -- keyring/common.sh@17 -- # name=key1 00:33:27.136 12:09:17 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:27.136 12:09:17 -- keyring/common.sh@17 -- # digest=0 00:33:27.136 12:09:17 -- keyring/common.sh@18 -- # mktemp 00:33:27.136 12:09:17 -- keyring/common.sh@18 -- # path=/tmp/tmp.9SFRxTdAed 00:33:27.136 12:09:17 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:27.136 12:09:17 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:33:27.136 12:09:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:27.136 12:09:17 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:27.136 12:09:17 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:33:27.136 12:09:17 -- nvmf/common.sh@693 -- # digest=0 00:33:27.136 12:09:17 -- nvmf/common.sh@694 -- # python - 00:33:27.136 12:09:17 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9SFRxTdAed 00:33:27.136 12:09:17 -- keyring/common.sh@23 -- # echo /tmp/tmp.9SFRxTdAed 00:33:27.136 12:09:17 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9SFRxTdAed 00:33:27.136 12:09:17 -- keyring/file.sh@30 -- # tgtpid=2702894 00:33:27.136 12:09:17 -- keyring/file.sh@32 -- # waitforlisten 2702894 00:33:27.136 12:09:17 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:27.136 12:09:17 -- common/autotest_common.sh@817 -- # '[' -z 2702894 ']' 00:33:27.136 12:09:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.136 12:09:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:27.136 12:09:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.136 12:09:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:27.136 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:33:27.394 [2024-04-18 12:09:17.729944] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:33:27.394 [2024-04-18 12:09:17.730055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2702894 ] 00:33:27.394 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.394 [2024-04-18 12:09:17.854011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.652 [2024-04-18 12:09:18.062352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.587 12:09:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:28.587 12:09:18 -- common/autotest_common.sh@850 -- # return 0 00:33:28.587 12:09:18 -- keyring/file.sh@33 -- # rpc_cmd 00:33:28.587 12:09:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:28.587 12:09:18 -- common/autotest_common.sh@10 -- # set +x 00:33:28.587 [2024-04-18 12:09:18.960808] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.587 null0 00:33:28.587 [2024-04-18 12:09:18.992860] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:28.587 [2024-04-18 12:09:18.993297] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:28.587 [2024-04-18 12:09:19.000901] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:28.587 12:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:28.587 12:09:19 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:28.587 12:09:19 -- common/autotest_common.sh@638 -- # local es=0 00:33:28.587 12:09:19 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:28.587 12:09:19 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:33:28.587 12:09:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.587 12:09:19 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:33:28.587 12:09:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.587 12:09:19 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:28.587 12:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:28.587 12:09:19 -- common/autotest_common.sh@10 -- # set +x 00:33:28.587 [2024-04-18 12:09:19.012909] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:33:28.587 { 00:33:28.587 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:28.587 "secure_channel": false, 00:33:28.587 "listen_address": { 00:33:28.587 "trtype": "tcp", 00:33:28.587 "traddr": "127.0.0.1", 00:33:28.587 "trsvcid": "4420" 00:33:28.587 }, 00:33:28.587 "method": "nvmf_subsystem_add_listener", 00:33:28.587 "req_id": 1 00:33:28.587 } 00:33:28.587 Got JSON-RPC error response 00:33:28.587 response: 00:33:28.587 { 00:33:28.587 "code": -32602, 00:33:28.587 "message": "Invalid parameters" 00:33:28.587 } 00:33:28.587 12:09:19 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:33:28.587 12:09:19 -- common/autotest_common.sh@641 -- # es=1 00:33:28.588 12:09:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:28.588 12:09:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:28.588 12:09:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:28.588 12:09:19 -- keyring/file.sh@46 -- # bperfpid=2703147 00:33:28.588 12:09:19 -- keyring/file.sh@48 -- # waitforlisten 2703147 /var/tmp/bperf.sock 00:33:28.588 12:09:19 -- common/autotest_common.sh@817 -- # '[' -z 2703147 ']' 00:33:28.588 12:09:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.588 12:09:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:28.588 12:09:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:28.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.588 12:09:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:28.588 12:09:19 -- common/autotest_common.sh@10 -- # set +x 00:33:28.588 12:09:19 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:28.588 [2024-04-18 12:09:19.099867] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
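The expected-failure check traced above re-issues nvmf_subsystem_add_listener for 127.0.0.1:4420 without the secure channel option and asserts that the target rejects it. A stand-alone sketch of that negative test, assuming the spdk_tgt from this run is still listening on its default RPC socket and reusing the exact arguments from the trace:

    # Sketch: the RPC must fail because a TLS listener already exists on this address.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo "unexpected success: listener with a different secure channel setting was accepted" >&2
        exit 1
    fi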
00:33:28.588 [2024-04-18 12:09:19.099962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2703147 ] 00:33:28.850 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.850 [2024-04-18 12:09:19.223037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.108 [2024-04-18 12:09:19.444379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.366 12:09:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:29.366 12:09:19 -- common/autotest_common.sh@850 -- # return 0 00:33:29.366 12:09:19 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:29.366 12:09:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:29.623 12:09:20 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9SFRxTdAed 00:33:29.624 12:09:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9SFRxTdAed 00:33:29.882 12:09:20 -- keyring/file.sh@51 -- # get_key key0 00:33:29.882 12:09:20 -- keyring/file.sh@51 -- # jq -r .path 00:33:29.882 12:09:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.882 12:09:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.882 12:09:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:29.882 12:09:20 -- keyring/file.sh@51 -- # [[ /tmp/tmp.6DjK1Yrm95 == \/\t\m\p\/\t\m\p\.\6\D\j\K\1\Y\r\m\9\5 ]] 00:33:29.882 12:09:20 -- keyring/file.sh@52 -- # get_key key1 00:33:29.882 12:09:20 -- keyring/file.sh@52 -- # jq -r .path 00:33:29.882 12:09:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.882 12:09:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.882 12:09:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:30.141 12:09:20 -- keyring/file.sh@52 -- # [[ /tmp/tmp.9SFRxTdAed == \/\t\m\p\/\t\m\p\.\9\S\F\R\x\T\d\A\e\d ]] 00:33:30.141 12:09:20 -- keyring/file.sh@53 -- # get_refcnt key0 00:33:30.141 12:09:20 -- keyring/common.sh@12 -- # get_key key0 00:33:30.141 12:09:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.141 12:09:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.141 12:09:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.141 12:09:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.399 12:09:20 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:30.399 12:09:20 -- keyring/file.sh@54 -- # get_refcnt key1 00:33:30.399 12:09:20 -- keyring/common.sh@12 -- # get_key key1 00:33:30.399 12:09:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.399 12:09:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.399 12:09:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.399 12:09:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:30.399 12:09:20 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:30.399 
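The key-registration steps traced above boil down to two keyring_file_add_key RPCs against the bdevperf application socket plus a keyring_get_keys query filtered with jq. A condensed sketch reusing the socket and temporary key paths from this run (any other PSK file with 0600 permissions would be registered the same way):

    # Sketch: register the two interchange-format PSK files and read one back.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95
    $rpc keyring_file_add_key key1 /tmp/tmp.9SFRxTdAed
    # Path and reference count of key0 as reported by the keyring.
    $rpc keyring_get_keys | jq '.[] | select(.name == "key0") | {path, refcnt}'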
12:09:20 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:30.399 12:09:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:30.658 [2024-04-18 12:09:21.084476] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:30.658 nvme0n1 00:33:30.658 12:09:21 -- keyring/file.sh@59 -- # get_refcnt key0 00:33:30.658 12:09:21 -- keyring/common.sh@12 -- # get_key key0 00:33:30.658 12:09:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.658 12:09:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.658 12:09:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.658 12:09:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.916 12:09:21 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:30.916 12:09:21 -- keyring/file.sh@60 -- # get_refcnt key1 00:33:30.916 12:09:21 -- keyring/common.sh@12 -- # get_key key1 00:33:30.917 12:09:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.917 12:09:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.917 12:09:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.917 12:09:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:31.175 12:09:21 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:31.175 12:09:21 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:31.175 Running I/O for 1 seconds... 
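The attach-and-run sequence traced above (its I/O results follow below) is a single bdev_nvme_attach_controller RPC that references the registered key by name, followed by the bdevperf helper script. A minimal sketch using the identifiers from this run:

    # Sketch: attach an NVMe-oF/TCP controller with the TLS PSK registered as "key0",
    # then run the workload configured on the bdevperf command line (-q 128 -o 4k -w randrw).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests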
00:33:32.127 00:33:32.127 Latency(us) 00:33:32.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.127 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:32.127 nvme0n1 : 1.01 9070.64 35.43 0.00 0.00 14019.11 11062.48 22754.10 00:33:32.127 =================================================================================================================== 00:33:32.127 Total : 9070.64 35.43 0.00 0.00 14019.11 11062.48 22754.10 00:33:32.127 0 00:33:32.127 12:09:22 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:32.127 12:09:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:32.412 12:09:22 -- keyring/file.sh@65 -- # get_refcnt key0 00:33:32.412 12:09:22 -- keyring/common.sh@12 -- # get_key key0 00:33:32.412 12:09:22 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.412 12:09:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.412 12:09:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.412 12:09:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.670 12:09:22 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:32.670 12:09:22 -- keyring/file.sh@66 -- # get_refcnt key1 00:33:32.670 12:09:22 -- keyring/common.sh@12 -- # get_key key1 00:33:32.671 12:09:22 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.671 12:09:22 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.671 12:09:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.671 12:09:22 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:32.671 12:09:23 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:32.671 12:09:23 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.671 12:09:23 -- common/autotest_common.sh@638 -- # local es=0 00:33:32.671 12:09:23 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.671 12:09:23 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:32.671 12:09:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.671 12:09:23 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:32.671 12:09:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.671 12:09:23 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.671 12:09:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.929 [2024-04-18 12:09:23.321177] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:32.929 [2024-04-18 12:09:23.321732] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (107): Transport endpoint is not connected 00:33:32.929 [2024-04-18 12:09:23.322714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (9): Bad file descriptor 00:33:32.929 [2024-04-18 12:09:23.323711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.929 [2024-04-18 12:09:23.323733] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:32.929 [2024-04-18 12:09:23.323744] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.929 request: 00:33:32.929 { 00:33:32.929 "name": "nvme0", 00:33:32.929 "trtype": "tcp", 00:33:32.929 "traddr": "127.0.0.1", 00:33:32.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.929 "adrfam": "ipv4", 00:33:32.929 "trsvcid": "4420", 00:33:32.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.929 "psk": "key1", 00:33:32.929 "method": "bdev_nvme_attach_controller", 00:33:32.929 "req_id": 1 00:33:32.929 } 00:33:32.929 Got JSON-RPC error response 00:33:32.929 response: 00:33:32.929 { 00:33:32.929 "code": -32602, 00:33:32.929 "message": "Invalid parameters" 00:33:32.929 } 00:33:32.929 12:09:23 -- common/autotest_common.sh@641 -- # es=1 00:33:32.929 12:09:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:32.929 12:09:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:32.929 12:09:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:32.929 12:09:23 -- keyring/file.sh@71 -- # get_refcnt key0 00:33:32.929 12:09:23 -- keyring/common.sh@12 -- # get_key key0 00:33:32.929 12:09:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.929 12:09:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.929 12:09:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.929 12:09:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.188 12:09:23 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:33.188 12:09:23 -- keyring/file.sh@72 -- # get_refcnt key1 00:33:33.188 12:09:23 -- keyring/common.sh@12 -- # get_key key1 00:33:33.188 12:09:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.188 12:09:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.188 12:09:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.188 12:09:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.188 12:09:23 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:33.188 12:09:23 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:33.188 12:09:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:33.446 12:09:23 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:33.446 12:09:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:33.705 12:09:24 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:33.705 12:09:24 -- keyring/file.sh@77 -- # jq length 00:33:33.705 12:09:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.705 
12:09:24 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:33.705 12:09:24 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.6DjK1Yrm95 00:33:33.705 12:09:24 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:33.705 12:09:24 -- common/autotest_common.sh@638 -- # local es=0 00:33:33.705 12:09:24 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:33.705 12:09:24 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:33.705 12:09:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:33.705 12:09:24 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:33.705 12:09:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:33.705 12:09:24 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:33.705 12:09:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:33.963 [2024-04-18 12:09:24.349228] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6DjK1Yrm95': 0100660 00:33:33.963 [2024-04-18 12:09:24.349263] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:33.963 request: 00:33:33.963 { 00:33:33.963 "name": "key0", 00:33:33.963 "path": "/tmp/tmp.6DjK1Yrm95", 00:33:33.963 "method": "keyring_file_add_key", 00:33:33.963 "req_id": 1 00:33:33.963 } 00:33:33.963 Got JSON-RPC error response 00:33:33.963 response: 00:33:33.963 { 00:33:33.963 "code": -1, 00:33:33.963 "message": "Operation not permitted" 00:33:33.963 } 00:33:33.963 12:09:24 -- common/autotest_common.sh@641 -- # es=1 00:33:33.963 12:09:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:33.963 12:09:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:33.963 12:09:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:33.963 12:09:24 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.6DjK1Yrm95 00:33:33.963 12:09:24 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:33.963 12:09:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6DjK1Yrm95 00:33:34.221 12:09:24 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.6DjK1Yrm95 00:33:34.221 12:09:24 -- keyring/file.sh@88 -- # get_refcnt key0 00:33:34.221 12:09:24 -- keyring/common.sh@12 -- # get_key key0 00:33:34.221 12:09:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.221 12:09:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.221 12:09:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.221 12:09:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:34.221 12:09:24 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:34.221 12:09:24 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.221 12:09:24 -- common/autotest_common.sh@638 -- # local es=0 00:33:34.221 12:09:24 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.221 12:09:24 
-- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:34.221 12:09:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:34.221 12:09:24 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:34.221 12:09:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:34.221 12:09:24 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.221 12:09:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.479 [2024-04-18 12:09:24.866626] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6DjK1Yrm95': No such file or directory 00:33:34.479 [2024-04-18 12:09:24.866660] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:34.479 [2024-04-18 12:09:24.866685] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:34.479 [2024-04-18 12:09:24.866696] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:34.479 [2024-04-18 12:09:24.866710] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:34.479 request: 00:33:34.479 { 00:33:34.479 "name": "nvme0", 00:33:34.479 "trtype": "tcp", 00:33:34.479 "traddr": "127.0.0.1", 00:33:34.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.479 "adrfam": "ipv4", 00:33:34.479 "trsvcid": "4420", 00:33:34.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.479 "psk": "key0", 00:33:34.479 "method": "bdev_nvme_attach_controller", 00:33:34.479 "req_id": 1 00:33:34.479 } 00:33:34.479 Got JSON-RPC error response 00:33:34.479 response: 00:33:34.479 { 00:33:34.479 "code": -19, 00:33:34.479 "message": "No such device" 00:33:34.479 } 00:33:34.479 12:09:24 -- common/autotest_common.sh@641 -- # es=1 00:33:34.479 12:09:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:34.479 12:09:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:34.479 12:09:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:34.479 12:09:24 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:34.479 12:09:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:34.738 12:09:25 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:34.738 12:09:25 -- keyring/common.sh@15 -- # local name key digest path 00:33:34.738 12:09:25 -- keyring/common.sh@17 -- # name=key0 00:33:34.738 12:09:25 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:34.738 12:09:25 -- keyring/common.sh@17 -- # digest=0 00:33:34.738 12:09:25 -- keyring/common.sh@18 -- # mktemp 00:33:34.738 12:09:25 -- keyring/common.sh@18 -- # path=/tmp/tmp.K9zDuPgmtk 00:33:34.738 12:09:25 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:34.738 12:09:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:34.738 12:09:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:34.738 12:09:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:34.738 12:09:25 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:34.738 12:09:25 -- nvmf/common.sh@693 -- # digest=0 00:33:34.738 12:09:25 -- nvmf/common.sh@694 -- # python - 00:33:34.738 12:09:25 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.K9zDuPgmtk 00:33:34.738 12:09:25 -- keyring/common.sh@23 -- # echo /tmp/tmp.K9zDuPgmtk 00:33:34.738 12:09:25 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.K9zDuPgmtk 00:33:34.738 12:09:25 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K9zDuPgmtk 00:33:34.738 12:09:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K9zDuPgmtk 00:33:34.996 12:09:25 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.996 12:09:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.996 nvme0n1 00:33:34.996 12:09:25 -- keyring/file.sh@99 -- # get_refcnt key0 00:33:34.996 12:09:25 -- keyring/common.sh@12 -- # get_key key0 00:33:34.996 12:09:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.996 12:09:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.996 12:09:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.996 12:09:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.253 12:09:25 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:35.253 12:09:25 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:35.253 12:09:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:35.511 12:09:25 -- keyring/file.sh@101 -- # get_key key0 00:33:35.511 12:09:25 -- keyring/file.sh@101 -- # jq -r .removed 00:33:35.511 12:09:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.511 12:09:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.511 12:09:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.770 12:09:26 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:35.770 12:09:26 -- keyring/file.sh@102 -- # get_refcnt key0 00:33:35.770 12:09:26 -- keyring/common.sh@12 -- # get_key key0 00:33:35.770 12:09:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:35.770 12:09:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.770 12:09:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.770 12:09:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.770 12:09:26 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:35.770 12:09:26 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:35.770 12:09:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:36.029 12:09:26 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:36.029 12:09:26 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.029 12:09:26 -- keyring/file.sh@104 -- # jq length 00:33:36.287 12:09:26 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:36.287 12:09:26 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K9zDuPgmtk 00:33:36.287 12:09:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K9zDuPgmtk 00:33:36.287 12:09:26 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9SFRxTdAed 00:33:36.287 12:09:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9SFRxTdAed 00:33:36.545 12:09:26 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.545 12:09:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.803 nvme0n1 00:33:36.803 12:09:27 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:36.803 12:09:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:37.063 12:09:27 -- keyring/file.sh@112 -- # config='{ 00:33:37.063 "subsystems": [ 00:33:37.063 { 00:33:37.063 "subsystem": "keyring", 00:33:37.063 "config": [ 00:33:37.063 { 00:33:37.063 "method": "keyring_file_add_key", 00:33:37.063 "params": { 00:33:37.063 "name": "key0", 00:33:37.063 "path": "/tmp/tmp.K9zDuPgmtk" 00:33:37.063 } 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "method": "keyring_file_add_key", 00:33:37.063 "params": { 00:33:37.063 "name": "key1", 00:33:37.063 "path": "/tmp/tmp.9SFRxTdAed" 00:33:37.063 } 00:33:37.063 } 00:33:37.063 ] 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "subsystem": "iobuf", 00:33:37.063 "config": [ 00:33:37.063 { 00:33:37.063 "method": "iobuf_set_options", 00:33:37.063 "params": { 00:33:37.063 "small_pool_count": 8192, 00:33:37.063 "large_pool_count": 1024, 00:33:37.063 "small_bufsize": 8192, 00:33:37.063 "large_bufsize": 135168 00:33:37.063 } 00:33:37.063 } 00:33:37.063 ] 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "subsystem": "sock", 00:33:37.063 "config": [ 00:33:37.063 { 00:33:37.063 "method": "sock_impl_set_options", 00:33:37.063 "params": { 00:33:37.063 "impl_name": "posix", 00:33:37.063 "recv_buf_size": 2097152, 00:33:37.063 "send_buf_size": 2097152, 00:33:37.063 "enable_recv_pipe": true, 00:33:37.063 "enable_quickack": false, 00:33:37.063 "enable_placement_id": 0, 00:33:37.063 "enable_zerocopy_send_server": true, 00:33:37.063 "enable_zerocopy_send_client": false, 00:33:37.063 "zerocopy_threshold": 0, 00:33:37.063 "tls_version": 0, 00:33:37.063 "enable_ktls": false 00:33:37.063 } 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "method": "sock_impl_set_options", 00:33:37.063 "params": { 00:33:37.063 "impl_name": "ssl", 00:33:37.063 "recv_buf_size": 4096, 00:33:37.063 "send_buf_size": 4096, 00:33:37.063 "enable_recv_pipe": true, 00:33:37.063 "enable_quickack": false, 00:33:37.063 "enable_placement_id": 0, 00:33:37.063 "enable_zerocopy_send_server": true, 00:33:37.063 "enable_zerocopy_send_client": false, 00:33:37.063 "zerocopy_threshold": 
0, 00:33:37.063 "tls_version": 0, 00:33:37.063 "enable_ktls": false 00:33:37.063 } 00:33:37.063 } 00:33:37.063 ] 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "subsystem": "vmd", 00:33:37.063 "config": [] 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "subsystem": "accel", 00:33:37.063 "config": [ 00:33:37.063 { 00:33:37.063 "method": "accel_set_options", 00:33:37.063 "params": { 00:33:37.063 "small_cache_size": 128, 00:33:37.063 "large_cache_size": 16, 00:33:37.063 "task_count": 2048, 00:33:37.063 "sequence_count": 2048, 00:33:37.063 "buf_count": 2048 00:33:37.063 } 00:33:37.063 } 00:33:37.063 ] 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "subsystem": "bdev", 00:33:37.063 "config": [ 00:33:37.063 { 00:33:37.063 "method": "bdev_set_options", 00:33:37.063 "params": { 00:33:37.063 "bdev_io_pool_size": 65535, 00:33:37.063 "bdev_io_cache_size": 256, 00:33:37.063 "bdev_auto_examine": true, 00:33:37.063 "iobuf_small_cache_size": 128, 00:33:37.063 "iobuf_large_cache_size": 16 00:33:37.063 } 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "method": "bdev_raid_set_options", 00:33:37.063 "params": { 00:33:37.063 "process_window_size_kb": 1024 00:33:37.063 } 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "method": "bdev_iscsi_set_options", 00:33:37.063 "params": { 00:33:37.063 "timeout_sec": 30 00:33:37.063 } 00:33:37.063 }, 00:33:37.063 { 00:33:37.063 "method": "bdev_nvme_set_options", 00:33:37.063 "params": { 00:33:37.063 "action_on_timeout": "none", 00:33:37.063 "timeout_us": 0, 00:33:37.063 "timeout_admin_us": 0, 00:33:37.063 "keep_alive_timeout_ms": 10000, 00:33:37.063 "arbitration_burst": 0, 00:33:37.063 "low_priority_weight": 0, 00:33:37.063 "medium_priority_weight": 0, 00:33:37.063 "high_priority_weight": 0, 00:33:37.063 "nvme_adminq_poll_period_us": 10000, 00:33:37.063 "nvme_ioq_poll_period_us": 0, 00:33:37.063 "io_queue_requests": 512, 00:33:37.063 "delay_cmd_submit": true, 00:33:37.063 "transport_retry_count": 4, 00:33:37.063 "bdev_retry_count": 3, 00:33:37.063 "transport_ack_timeout": 0, 00:33:37.064 "ctrlr_loss_timeout_sec": 0, 00:33:37.064 "reconnect_delay_sec": 0, 00:33:37.064 "fast_io_fail_timeout_sec": 0, 00:33:37.064 "disable_auto_failback": false, 00:33:37.064 "generate_uuids": false, 00:33:37.064 "transport_tos": 0, 00:33:37.064 "nvme_error_stat": false, 00:33:37.064 "rdma_srq_size": 0, 00:33:37.064 "io_path_stat": false, 00:33:37.064 "allow_accel_sequence": false, 00:33:37.064 "rdma_max_cq_size": 0, 00:33:37.064 "rdma_cm_event_timeout_ms": 0, 00:33:37.064 "dhchap_digests": [ 00:33:37.064 "sha256", 00:33:37.064 "sha384", 00:33:37.064 "sha512" 00:33:37.064 ], 00:33:37.064 "dhchap_dhgroups": [ 00:33:37.064 "null", 00:33:37.064 "ffdhe2048", 00:33:37.064 "ffdhe3072", 00:33:37.064 "ffdhe4096", 00:33:37.064 "ffdhe6144", 00:33:37.064 "ffdhe8192" 00:33:37.064 ] 00:33:37.064 } 00:33:37.064 }, 00:33:37.064 { 00:33:37.064 "method": "bdev_nvme_attach_controller", 00:33:37.064 "params": { 00:33:37.064 "name": "nvme0", 00:33:37.064 "trtype": "TCP", 00:33:37.064 "adrfam": "IPv4", 00:33:37.064 "traddr": "127.0.0.1", 00:33:37.064 "trsvcid": "4420", 00:33:37.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.064 "prchk_reftag": false, 00:33:37.064 "prchk_guard": false, 00:33:37.064 "ctrlr_loss_timeout_sec": 0, 00:33:37.064 "reconnect_delay_sec": 0, 00:33:37.064 "fast_io_fail_timeout_sec": 0, 00:33:37.064 "psk": "key0", 00:33:37.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.064 "hdgst": false, 00:33:37.064 "ddgst": false 00:33:37.064 } 00:33:37.064 }, 00:33:37.064 { 00:33:37.064 "method": 
"bdev_nvme_set_hotplug", 00:33:37.064 "params": { 00:33:37.064 "period_us": 100000, 00:33:37.064 "enable": false 00:33:37.064 } 00:33:37.064 }, 00:33:37.064 { 00:33:37.064 "method": "bdev_wait_for_examine" 00:33:37.064 } 00:33:37.064 ] 00:33:37.064 }, 00:33:37.064 { 00:33:37.064 "subsystem": "nbd", 00:33:37.064 "config": [] 00:33:37.064 } 00:33:37.064 ] 00:33:37.064 }' 00:33:37.064 12:09:27 -- keyring/file.sh@114 -- # killprocess 2703147 00:33:37.064 12:09:27 -- common/autotest_common.sh@936 -- # '[' -z 2703147 ']' 00:33:37.064 12:09:27 -- common/autotest_common.sh@940 -- # kill -0 2703147 00:33:37.064 12:09:27 -- common/autotest_common.sh@941 -- # uname 00:33:37.064 12:09:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:37.064 12:09:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2703147 00:33:37.064 12:09:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:37.064 12:09:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:37.064 12:09:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2703147' 00:33:37.064 killing process with pid 2703147 00:33:37.064 12:09:27 -- common/autotest_common.sh@955 -- # kill 2703147 00:33:37.064 Received shutdown signal, test time was about 1.000000 seconds 00:33:37.064 00:33:37.064 Latency(us) 00:33:37.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.064 =================================================================================================================== 00:33:37.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.064 12:09:27 -- common/autotest_common.sh@960 -- # wait 2703147 00:33:37.998 12:09:28 -- keyring/file.sh@117 -- # bperfpid=2704737 00:33:37.998 12:09:28 -- keyring/file.sh@119 -- # waitforlisten 2704737 /var/tmp/bperf.sock 00:33:37.998 12:09:28 -- common/autotest_common.sh@817 -- # '[' -z 2704737 ']' 00:33:37.998 12:09:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.998 12:09:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:37.998 12:09:28 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:37.998 12:09:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:37.998 12:09:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:37.998 12:09:28 -- keyring/file.sh@115 -- # echo '{ 00:33:37.998 "subsystems": [ 00:33:37.998 { 00:33:37.998 "subsystem": "keyring", 00:33:37.998 "config": [ 00:33:37.998 { 00:33:37.998 "method": "keyring_file_add_key", 00:33:37.998 "params": { 00:33:37.998 "name": "key0", 00:33:37.998 "path": "/tmp/tmp.K9zDuPgmtk" 00:33:37.998 } 00:33:37.998 }, 00:33:37.998 { 00:33:37.998 "method": "keyring_file_add_key", 00:33:37.998 "params": { 00:33:37.998 "name": "key1", 00:33:37.998 "path": "/tmp/tmp.9SFRxTdAed" 00:33:37.998 } 00:33:37.998 } 00:33:37.998 ] 00:33:37.998 }, 00:33:37.998 { 00:33:37.998 "subsystem": "iobuf", 00:33:37.998 "config": [ 00:33:37.998 { 00:33:37.998 "method": "iobuf_set_options", 00:33:37.998 "params": { 00:33:37.998 "small_pool_count": 8192, 00:33:37.998 "large_pool_count": 1024, 00:33:37.998 "small_bufsize": 8192, 00:33:37.998 "large_bufsize": 135168 00:33:37.998 } 00:33:37.998 } 00:33:37.998 ] 00:33:37.998 }, 00:33:37.998 { 00:33:37.998 "subsystem": "sock", 00:33:37.998 "config": [ 00:33:37.998 { 00:33:37.998 "method": "sock_impl_set_options", 00:33:37.998 "params": { 00:33:37.998 "impl_name": "posix", 00:33:37.998 "recv_buf_size": 2097152, 00:33:37.998 "send_buf_size": 2097152, 00:33:37.998 "enable_recv_pipe": true, 00:33:37.998 "enable_quickack": false, 00:33:37.998 "enable_placement_id": 0, 00:33:37.998 "enable_zerocopy_send_server": true, 00:33:37.998 "enable_zerocopy_send_client": false, 00:33:37.998 "zerocopy_threshold": 0, 00:33:37.998 "tls_version": 0, 00:33:37.998 "enable_ktls": false 00:33:37.998 } 00:33:37.998 }, 00:33:37.998 { 00:33:37.998 "method": "sock_impl_set_options", 00:33:37.998 "params": { 00:33:37.998 "impl_name": "ssl", 00:33:37.998 "recv_buf_size": 4096, 00:33:37.998 "send_buf_size": 4096, 00:33:37.998 "enable_recv_pipe": true, 00:33:37.998 "enable_quickack": false, 00:33:37.998 "enable_placement_id": 0, 00:33:37.998 "enable_zerocopy_send_server": true, 00:33:37.998 "enable_zerocopy_send_client": false, 00:33:37.998 "zerocopy_threshold": 0, 00:33:37.998 "tls_version": 0, 00:33:37.998 "enable_ktls": false 00:33:37.998 } 00:33:37.998 } 00:33:37.998 ] 00:33:37.998 }, 00:33:37.998 { 00:33:37.999 "subsystem": "vmd", 00:33:37.999 "config": [] 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "subsystem": "accel", 00:33:37.999 "config": [ 00:33:37.999 { 00:33:37.999 "method": "accel_set_options", 00:33:37.999 "params": { 00:33:37.999 "small_cache_size": 128, 00:33:37.999 "large_cache_size": 16, 00:33:37.999 "task_count": 2048, 00:33:37.999 "sequence_count": 2048, 00:33:37.999 "buf_count": 2048 00:33:37.999 } 00:33:37.999 } 00:33:37.999 ] 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "subsystem": "bdev", 00:33:37.999 "config": [ 00:33:37.999 { 00:33:37.999 "method": "bdev_set_options", 00:33:37.999 "params": { 00:33:37.999 "bdev_io_pool_size": 65535, 00:33:37.999 "bdev_io_cache_size": 256, 00:33:37.999 "bdev_auto_examine": true, 00:33:37.999 "iobuf_small_cache_size": 128, 00:33:37.999 "iobuf_large_cache_size": 16 00:33:37.999 } 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "method": "bdev_raid_set_options", 00:33:37.999 "params": { 00:33:37.999 "process_window_size_kb": 1024 00:33:37.999 } 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "method": "bdev_iscsi_set_options", 00:33:37.999 "params": { 00:33:37.999 "timeout_sec": 30 00:33:37.999 } 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "method": "bdev_nvme_set_options", 00:33:37.999 "params": { 00:33:37.999 "action_on_timeout": "none", 
00:33:37.999 "timeout_us": 0, 00:33:37.999 "timeout_admin_us": 0, 00:33:37.999 "keep_alive_timeout_ms": 10000, 00:33:37.999 "arbitration_burst": 0, 00:33:37.999 "low_priority_weight": 0, 00:33:37.999 "medium_priority_weight": 0, 00:33:37.999 "high_priority_weight": 0, 00:33:37.999 "nvme_adminq_poll_period_us": 10000, 00:33:37.999 "nvme_ioq_poll_period_us": 0, 00:33:37.999 "io_queue_requests": 512, 00:33:37.999 "delay_cmd_submit": true, 00:33:37.999 "transport_retry_count": 4, 00:33:37.999 "bdev_retry_count": 3, 00:33:37.999 "transport_ack_timeout": 0, 00:33:37.999 "ctrlr_loss_timeout_sec": 0, 00:33:37.999 "reconnect_delay_sec": 0, 00:33:37.999 "fast_io_fail_timeout_sec": 0, 00:33:37.999 "disable_auto_failback": false, 00:33:37.999 "generate_uuids": false, 00:33:37.999 "transport_tos": 0, 00:33:37.999 "nvme_error_stat": false, 00:33:37.999 "rdma_srq_size": 0, 00:33:37.999 "io_path_stat": false, 00:33:37.999 "allow_accel_sequence": false, 00:33:37.999 "rdma_max_cq_size": 0, 00:33:37.999 "rdma_cm_event_timeout_ms": 0, 00:33:37.999 "dhchap_digests": [ 00:33:37.999 "sha256", 00:33:37.999 "sha384", 00:33:37.999 "sha512" 00:33:37.999 ], 00:33:37.999 "dhchap_dhgroups": [ 00:33:37.999 "null", 00:33:37.999 "ffdhe2048", 00:33:37.999 "ffdhe3072", 00:33:37.999 "ffdhe4096", 00:33:37.999 "ffdhe6144", 00:33:37.999 "ffdhe8192" 00:33:37.999 ] 00:33:37.999 } 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "method": "bdev_nvme_attach_controller", 00:33:37.999 "params": { 00:33:37.999 "name": "nvme0", 00:33:37.999 "trtype": "TCP", 00:33:37.999 "adrfam": "IPv4", 00:33:37.999 "traddr": "127.0.0.1", 00:33:37.999 "trsvcid": "4420", 00:33:37.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.999 "prchk_reftag": false, 00:33:37.999 "prchk_guard": false, 00:33:37.999 "ctrlr_loss_timeout_sec": 0, 00:33:37.999 "reconnect_delay_sec": 0, 00:33:37.999 "fast_io_fail_timeout_sec": 0, 00:33:37.999 "psk": "key0", 00:33:37.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.999 "hdgst": false, 00:33:37.999 "ddgst": false 00:33:37.999 } 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "method": "bdev_nvme_set_hotplug", 00:33:37.999 "params": { 00:33:37.999 "period_us": 100000, 00:33:37.999 "enable": false 00:33:37.999 } 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "method": "bdev_wait_for_examine" 00:33:37.999 } 00:33:37.999 ] 00:33:37.999 }, 00:33:37.999 { 00:33:37.999 "subsystem": "nbd", 00:33:37.999 "config": [] 00:33:37.999 } 00:33:37.999 ] 00:33:37.999 }' 00:33:37.999 12:09:28 -- common/autotest_common.sh@10 -- # set +x 00:33:38.257 [2024-04-18 12:09:28.585796] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:33:38.257 [2024-04-18 12:09:28.585904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2704737 ] 00:33:38.257 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.257 [2024-04-18 12:09:28.709687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.516 [2024-04-18 12:09:28.918406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.083 [2024-04-18 12:09:29.360241] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:39.083 12:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:39.083 12:09:29 -- common/autotest_common.sh@850 -- # return 0 00:33:39.083 12:09:29 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:39.083 12:09:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.083 12:09:29 -- keyring/file.sh@120 -- # jq length 00:33:39.342 12:09:29 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:39.342 12:09:29 -- keyring/file.sh@121 -- # get_refcnt key0 00:33:39.342 12:09:29 -- keyring/common.sh@12 -- # get_key key0 00:33:39.342 12:09:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.342 12:09:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.342 12:09:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.342 12:09:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:39.342 12:09:29 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:39.342 12:09:29 -- keyring/file.sh@122 -- # get_refcnt key1 00:33:39.342 12:09:29 -- keyring/common.sh@12 -- # get_key key1 00:33:39.342 12:09:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.342 12:09:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.342 12:09:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:39.342 12:09:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.601 12:09:29 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:39.601 12:09:29 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:39.601 12:09:29 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:39.601 12:09:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:39.860 12:09:30 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:39.860 12:09:30 -- keyring/file.sh@1 -- # cleanup 00:33:39.860 12:09:30 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.K9zDuPgmtk /tmp/tmp.9SFRxTdAed 00:33:39.860 12:09:30 -- keyring/file.sh@20 -- # killprocess 2704737 00:33:39.860 12:09:30 -- common/autotest_common.sh@936 -- # '[' -z 2704737 ']' 00:33:39.860 12:09:30 -- common/autotest_common.sh@940 -- # kill -0 2704737 00:33:39.860 12:09:30 -- common/autotest_common.sh@941 -- # uname 00:33:39.860 12:09:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:39.860 12:09:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2704737 00:33:39.860 12:09:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:39.860 12:09:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:39.860 12:09:30 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2704737' 00:33:39.860 killing process with pid 2704737 00:33:39.860 12:09:30 -- common/autotest_common.sh@955 -- # kill 2704737 00:33:39.860 Received shutdown signal, test time was about 1.000000 seconds 00:33:39.860 00:33:39.860 Latency(us) 00:33:39.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.860 =================================================================================================================== 00:33:39.860 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:39.860 12:09:30 -- common/autotest_common.sh@960 -- # wait 2704737 00:33:40.796 12:09:31 -- keyring/file.sh@21 -- # killprocess 2702894 00:33:40.796 12:09:31 -- common/autotest_common.sh@936 -- # '[' -z 2702894 ']' 00:33:40.796 12:09:31 -- common/autotest_common.sh@940 -- # kill -0 2702894 00:33:40.796 12:09:31 -- common/autotest_common.sh@941 -- # uname 00:33:40.796 12:09:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:40.796 12:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2702894 00:33:40.796 12:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:40.796 12:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:40.796 12:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2702894' 00:33:40.796 killing process with pid 2702894 00:33:40.796 12:09:31 -- common/autotest_common.sh@955 -- # kill 2702894 00:33:40.796 [2024-04-18 12:09:31.310847] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:40.796 12:09:31 -- common/autotest_common.sh@960 -- # wait 2702894 00:33:43.330 00:33:43.330 real 0m16.270s 00:33:43.330 user 0m33.345s 00:33:43.330 sys 0m3.645s 00:33:43.330 12:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:43.330 12:09:33 -- common/autotest_common.sh@10 -- # set +x 00:33:43.330 ************************************ 00:33:43.330 END TEST keyring_file 00:33:43.330 ************************************ 00:33:43.330 12:09:33 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:43.330 12:09:33 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:43.330 12:09:33 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:43.330 12:09:33 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:43.330 12:09:33 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:43.330 12:09:33 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:43.330 12:09:33 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:43.330 12:09:33 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:43.330 12:09:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:43.330 12:09:33 -- common/autotest_common.sh@10 -- # set +x 00:33:43.330 12:09:33 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:33:43.330 12:09:33 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:33:43.330 12:09:33 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:33:43.330 12:09:33 -- common/autotest_common.sh@10 -- # set +x 00:33:49.895 INFO: APP EXITING 00:33:49.895 INFO: killing all VMs 00:33:49.895 INFO: killing vhost app 00:33:49.895 INFO: EXIT DONE 00:33:53.219 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:53.219 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:33:55.756 Cleaning 00:33:55.756 Removing: /var/run/dpdk/spdk0/config 00:33:55.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:55.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:55.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:55.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:56.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:56.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:56.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:56.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:56.015 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:56.015 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:56.015 Removing: /var/run/dpdk/spdk1/config 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:56.015 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:56.015 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:56.015 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:56.015 Removing: /var/run/dpdk/spdk2/config 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:56.015 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:56.015 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:56.015 Removing: /var/run/dpdk/spdk3/config 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:56.015 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:56.015 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:56.015 Removing: /var/run/dpdk/spdk4/config 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:56.015 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:56.015 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:56.015 Removing: /dev/shm/bdev_svc_trace.1 00:33:56.015 Removing: /dev/shm/nvmf_trace.0 00:33:56.015 Removing: /dev/shm/spdk_tgt_trace.pid2288658 00:33:56.015 Removing: /var/run/dpdk/spdk0 00:33:56.015 Removing: /var/run/dpdk/spdk1 00:33:56.015 Removing: /var/run/dpdk/spdk2 00:33:56.015 Removing: /var/run/dpdk/spdk3 00:33:56.015 Removing: /var/run/dpdk/spdk4 00:33:56.015 Removing: /var/run/dpdk/spdk_pid2284257 00:33:56.015 Removing: /var/run/dpdk/spdk_pid2286077 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2288658 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2289943 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2291301 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2292004 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2293515 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2293727 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2294455 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2296438 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2298140 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2299030 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2299772 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2300518 00:33:56.016 Removing: /var/run/dpdk/spdk_pid2301384 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2301743 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2302228 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2302561 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2303818 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2307687 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2308524 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2309353 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2309583 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2311276 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2311545 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2313452 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2313718 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2314547 00:33:56.275 Removing: 
/var/run/dpdk/spdk_pid2314795 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2315398 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2315664 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2317128 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2317555 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2318011 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2318854 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2319151 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2319508 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2320060 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2320615 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2321164 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2321505 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2322033 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2322591 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2323132 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2323507 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2324009 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2324554 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2325114 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2325509 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2325975 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2326532 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2327085 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2327535 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2327961 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2328513 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2329068 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2329616 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2329964 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2330848 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2335474 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2385357 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2390396 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2400284 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2406144 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2410669 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2411238 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2424269 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2424297 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2425328 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2426133 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2427166 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2427733 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2427739 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2428009 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2428270 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2428273 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2429148 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2430140 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2430955 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2431700 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2431741 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2432017 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2434477 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2435925 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2444947 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2445472 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2450318 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2456739 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2459489 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2471188 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2481393 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2483503 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2484597 00:33:56.275 Removing: /var/run/dpdk/spdk_pid2503144 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2507669 00:33:56.534 Removing: 
/var/run/dpdk/spdk_pid2512667 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2514363 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2516488 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2516772 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2517071 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2517574 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2518431 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2520555 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2522526 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2523459 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2526059 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2527168 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2528269 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2532919 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2543879 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2548512 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2555202 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2557508 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2559930 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2565594 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2570334 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2578684 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2578691 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2583849 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2584098 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2584312 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2584837 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2584855 00:33:56.534 Removing: /var/run/dpdk/spdk_pid2589791 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2590435 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2595454 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2598393 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2604522 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2610024 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2618635 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2618638 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2637926 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2638761 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2639644 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2640529 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2641917 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2642656 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2643374 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2644098 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2648966 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2649443 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2656210 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2656463 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2659528 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2668095 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2668101 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2673921 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2676192 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2678406 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2679681 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2682125 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2683433 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2693151 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2693679 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2694216 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2697154 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2697696 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2698352 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2702894 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2703147 00:33:56.535 Removing: /var/run/dpdk/spdk_pid2704737 00:33:56.535 Clean 00:33:56.794 12:09:47 -- common/autotest_common.sh@1437 -- # 
return 0 00:33:56.794 12:09:47 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:33:56.794 12:09:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:56.794 12:09:47 -- common/autotest_common.sh@10 -- # set +x 00:33:56.794 12:09:47 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:33:56.794 12:09:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:56.794 12:09:47 -- common/autotest_common.sh@10 -- # set +x 00:33:56.794 12:09:47 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:56.794 12:09:47 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:56.794 12:09:47 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:57.053 12:09:47 -- spdk/autotest.sh@389 -- # hash lcov 00:33:57.053 12:09:47 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:57.053 12:09:47 -- spdk/autotest.sh@391 -- # hostname 00:33:57.053 12:09:47 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:57.053 geninfo: WARNING: invalid characters removed from testname! 00:34:18.983 12:10:07 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:19.241 12:10:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:21.144 12:10:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:23.047 12:10:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:24.451 12:10:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:25.852 12:10:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:27.754 12:10:18 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:27.754 12:10:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.754 12:10:18 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:27.754 12:10:18 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.754 12:10:18 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.754 12:10:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.754 12:10:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.754 12:10:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.754 12:10:18 -- paths/export.sh@5 -- $ export PATH 00:34:27.754 12:10:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.754 12:10:18 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:27.754 12:10:18 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:27.754 12:10:18 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713435018.XXXXXX 00:34:27.754 12:10:18 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713435018.dovTOO 00:34:27.754 12:10:18 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:27.754 12:10:18 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:34:27.754 12:10:18 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:27.754 12:10:18 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:27.754 12:10:18 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:27.754 12:10:18 -- common/autobuild_common.sh@451 -- $ get_config_params 00:34:27.754 12:10:18 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:34:27.754 12:10:18 -- common/autotest_common.sh@10 -- $ set +x 00:34:27.754 12:10:18 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user' 00:34:27.754 12:10:18 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:34:27.754 12:10:18 -- pm/common@17 -- $ local monitor 00:34:27.754 12:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:27.754 12:10:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2719176 00:34:27.754 12:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:27.754 12:10:18 -- pm/common@21 -- $ date +%s 00:34:27.754 12:10:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2719178 00:34:27.754 12:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:27.754 12:10:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2719181 00:34:27.754 12:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:27.754 12:10:18 -- pm/common@21 -- $ date +%s 00:34:27.754 12:10:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2719183 00:34:27.754 12:10:18 -- pm/common@26 -- $ sleep 1 00:34:27.754 12:10:18 -- pm/common@21 -- $ date +%s 00:34:27.754 12:10:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713435018 00:34:27.754 12:10:18 -- pm/common@21 -- $ date +%s 00:34:27.754 12:10:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713435018 00:34:27.754 12:10:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713435018 00:34:27.754 12:10:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713435018 00:34:27.754 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713435018_collect-cpu-load.pm.log 00:34:27.754 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713435018_collect-vmstat.pm.log 00:34:27.754 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713435018_collect-cpu-temp.pm.log 00:34:27.754 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713435018_collect-bmc-pm.bmc.pm.log 
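(Editor's note, for readers of this log: the four "Redirecting to ..." lines above come from the power/vmstat collectors launched by start_monitor_resources in scripts/perf/pm, and the matching stop_monitor_resources trap that fires on EXIT appears just below, where each collector's pid file is checked and the process is sent SIGTERM. The following is a minimal sketch of that start/stop pattern, in the same bash used by the test scripts. Only the collector names, paths, and flags visible in this log are taken from it; the pid-file naming and option handling are assumptions, not the actual scripts/perf/pm/common implementation.)

    #!/usr/bin/env bash
    # Sketch only: mirrors the monitor lifecycle seen in this log, details assumed.
    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    output_dir=$spdk_dir/../output
    stamp=$(date +%s)

    start_monitors() {
        local c
        for c in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
            # Each collector runs in the background and logs to
            # $output_dir/power/monitor.autopackage.sh.$stamp_<collector>.pm.log,
            # as shown by the "Redirecting to ..." lines above.
            sudo -E "$spdk_dir/scripts/perf/pm/$c" \
                -d "$output_dir/power" -l -p "monitor.autopackage.sh.$stamp" &
        done
    }

    stop_monitors() {
        # Assumed pid-file layout: one <collector>.pid per monitor under power/.
        local pidfile
        for pidfile in "$output_dir"/power/collect-*.pid; do
            [[ -e $pidfile ]] || continue
            sudo kill -TERM "$(<"$pidfile")" 2>/dev/null || true
        done
    }

    trap stop_monitors EXIT
    start_monitors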
00:34:28.687 12:10:19 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:34:28.687 12:10:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:28.687 12:10:19 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:28.687 12:10:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:28.687 12:10:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:28.688 12:10:19 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:28.688 12:10:19 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:28.688 12:10:19 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:28.688 12:10:19 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:28.688 12:10:19 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:28.688 12:10:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:28.688 12:10:19 -- pm/common@30 -- $ signal_monitor_resources TERM 00:34:28.688 12:10:19 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:34:28.688 12:10:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:28.688 12:10:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:28.688 12:10:19 -- pm/common@45 -- $ pid=2719189 00:34:28.688 12:10:19 -- pm/common@52 -- $ sudo kill -TERM 2719189 00:34:28.688 12:10:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:28.688 12:10:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:28.688 12:10:19 -- pm/common@45 -- $ pid=2719192 00:34:28.688 12:10:19 -- pm/common@52 -- $ sudo kill -TERM 2719192 00:34:28.945 12:10:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:28.945 12:10:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:28.945 12:10:19 -- pm/common@45 -- $ pid=2719194 00:34:28.945 12:10:19 -- pm/common@52 -- $ sudo kill -TERM 2719194 00:34:28.945 12:10:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:28.945 12:10:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:28.945 12:10:19 -- pm/common@45 -- $ pid=2719193 00:34:28.945 12:10:19 -- pm/common@52 -- $ sudo kill -TERM 2719193 00:34:28.945 + [[ -n 2174370 ]] 00:34:28.945 + sudo kill 2174370 00:34:28.955 [Pipeline] } 00:34:28.973 [Pipeline] // stage 00:34:28.978 [Pipeline] } 00:34:28.996 [Pipeline] // timeout 00:34:29.000 [Pipeline] } 00:34:29.017 [Pipeline] // catchError 00:34:29.021 [Pipeline] } 00:34:29.038 [Pipeline] // wrap 00:34:29.043 [Pipeline] } 00:34:29.059 [Pipeline] // catchError 00:34:29.067 [Pipeline] stage 00:34:29.069 [Pipeline] { (Epilogue) 00:34:29.082 [Pipeline] catchError 00:34:29.083 [Pipeline] { 00:34:29.097 [Pipeline] echo 00:34:29.098 Cleanup processes 00:34:29.103 [Pipeline] sh 00:34:29.382 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:29.382 2719178 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713435018 00:34:29.382 2719192 bash 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713435018 00:34:29.382 2719285 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:29.382 2719644 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:29.394 [Pipeline] sh 00:34:29.671 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:29.671 ++ grep -v 'sudo pgrep' 00:34:29.671 ++ awk '{print $1}' 00:34:29.671 + sudo kill -9 2719178 2719192 2719285 00:34:29.682 [Pipeline] sh 00:34:29.960 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:29.960 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:34:35.216 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:34:38.500 [Pipeline] sh 00:34:38.780 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:38.780 Artifacts sizes are good 00:34:38.792 [Pipeline] archiveArtifacts 00:34:38.798 Archiving artifacts 00:34:38.920 [Pipeline] sh 00:34:39.198 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:39.211 [Pipeline] cleanWs 00:34:39.219 [WS-CLEANUP] Deleting project workspace... 00:34:39.219 [WS-CLEANUP] Deferred wipeout is used... 00:34:39.225 [WS-CLEANUP] done 00:34:39.227 [Pipeline] } 00:34:39.248 [Pipeline] // catchError 00:34:39.259 [Pipeline] sh 00:34:39.539 + logger -p user.info -t JENKINS-CI 00:34:39.547 [Pipeline] } 00:34:39.561 [Pipeline] // stage 00:34:39.565 [Pipeline] } 00:34:39.579 [Pipeline] // node 00:34:39.582 [Pipeline] End of Pipeline 00:34:39.617 Finished: SUCCESS